Jan 22 13:43:39 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 13:43:39 crc restorecon[4691]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:39 crc restorecon[4691]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:39 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 
13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 13:43:40 crc 
restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 
13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 13:43:40 crc restorecon[4691]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 13:43:40 crc kubenswrapper[4769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 13:43:40 crc kubenswrapper[4769]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 13:43:40 crc kubenswrapper[4769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 13:43:40 crc kubenswrapper[4769]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 13:43:40 crc kubenswrapper[4769]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 13:43:40 crc kubenswrapper[4769]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.712847 4769 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716189 4769 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716229 4769 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716236 4769 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716241 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716246 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716253 4769 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716259 4769 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716266 4769 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716273 4769 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716279 4769 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716284 4769 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716289 4769 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716295 4769 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716300 4769 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716306 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716323 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716328 4769 feature_gate.go:330] unrecognized feature gate: Example Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716334 4769 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716339 4769 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716344 4769 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716349 4769 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716355 4769 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716360 4769 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716366 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716371 4769 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716377 4769 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716382 4769 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716387 4769 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716393 4769 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716398 4769 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716404 4769 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716409 4769 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716414 4769 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716418 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716423 4769 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716428 4769 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716434 4769 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716440 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716445 4769 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716450 4769 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716454 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716459 4769 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716464 4769 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716470 4769 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
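[Editor's note] The long run of "unrecognized feature gate" warnings here, which continues below and is re-emitted in full several times during startup, is apparently the full OpenShift cluster gate set being handed to a kubelet that only recognizes the upstream Kubernetes gates; as the log shows, unknown names are warned about and skipped rather than failing startup. When reading a saved journal, a stdlib-only sketch like the following condenses the noise to a unique sorted list (the file name is illustrative):

# Sketch: condense the repeated "unrecognized feature gate" warnings in a
# saved journal (e.g. from `journalctl -u kubelet`) to unique gate names.
import re
import sys

PATTERN = re.compile(r"unrecognized feature gate: (\w+)")

def unknown_gates(path: str) -> list[str]:
    # Collect every gate name mentioned anywhere in the file, deduplicated.
    with open(path, encoding="utf-8", errors="replace") as fh:
        return sorted({m.group(1) for line in fh for m in PATTERN.finditer(line)})

if __name__ == "__main__":
    for gate in unknown_gates(sys.argv[1]):
        print(gate)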
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716477 4769 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716482 4769 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716489 4769 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716494 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716501 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716506 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716510 4769 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716516 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716521 4769 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716525 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716530 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716535 4769 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716539 4769 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716544 4769 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716549 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716553 4769 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716558 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716563 4769 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716568 4769 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716572 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716577 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716582 4769 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716587 4769 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716592 4769 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716597 4769 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716602 4769 
feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.716607 4769 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716877 4769 flags.go:64] FLAG: --address="0.0.0.0" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716891 4769 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716902 4769 flags.go:64] FLAG: --anonymous-auth="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716909 4769 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716917 4769 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716923 4769 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716933 4769 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716941 4769 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716947 4769 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716953 4769 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716960 4769 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716965 4769 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716971 4769 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716976 4769 flags.go:64] FLAG: --cgroup-root="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716982 4769 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716987 4769 flags.go:64] FLAG: --client-ca-file="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.716998 4769 flags.go:64] FLAG: --cloud-config="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717005 4769 flags.go:64] FLAG: --cloud-provider="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717010 4769 flags.go:64] FLAG: --cluster-dns="[]" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717016 4769 flags.go:64] FLAG: --cluster-domain="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717022 4769 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717028 4769 flags.go:64] FLAG: --config-dir="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717033 4769 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717039 4769 flags.go:64] FLAG: --container-log-max-files="5" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717047 4769 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717052 4769 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717058 4769 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717064 4769 flags.go:64] FLAG: 
--containerd-namespace="k8s.io" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717069 4769 flags.go:64] FLAG: --contention-profiling="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717075 4769 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717082 4769 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717088 4769 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717095 4769 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717102 4769 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717108 4769 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717114 4769 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717120 4769 flags.go:64] FLAG: --enable-load-reader="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717127 4769 flags.go:64] FLAG: --enable-server="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717134 4769 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717144 4769 flags.go:64] FLAG: --event-burst="100" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717151 4769 flags.go:64] FLAG: --event-qps="50" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717158 4769 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717164 4769 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717169 4769 flags.go:64] FLAG: --eviction-hard="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717177 4769 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717182 4769 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717188 4769 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717193 4769 flags.go:64] FLAG: --eviction-soft="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717199 4769 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717205 4769 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717210 4769 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717216 4769 flags.go:64] FLAG: --experimental-mounter-path="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717222 4769 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717228 4769 flags.go:64] FLAG: --fail-swap-on="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717233 4769 flags.go:64] FLAG: --feature-gates="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717240 4769 flags.go:64] FLAG: --file-check-frequency="20s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717246 4769 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717252 4769 flags.go:64] FLAG: 
--hairpin-mode="promiscuous-bridge" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717259 4769 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717265 4769 flags.go:64] FLAG: --healthz-port="10248" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717271 4769 flags.go:64] FLAG: --help="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717277 4769 flags.go:64] FLAG: --hostname-override="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717282 4769 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717288 4769 flags.go:64] FLAG: --http-check-frequency="20s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717293 4769 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717299 4769 flags.go:64] FLAG: --image-credential-provider-config="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717304 4769 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717310 4769 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717315 4769 flags.go:64] FLAG: --image-service-endpoint="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717320 4769 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717326 4769 flags.go:64] FLAG: --kube-api-burst="100" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717332 4769 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717337 4769 flags.go:64] FLAG: --kube-api-qps="50" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717343 4769 flags.go:64] FLAG: --kube-reserved="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717349 4769 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717354 4769 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717360 4769 flags.go:64] FLAG: --kubelet-cgroups="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717365 4769 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717371 4769 flags.go:64] FLAG: --lock-file="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717376 4769 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717382 4769 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717387 4769 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717397 4769 flags.go:64] FLAG: --log-json-split-stream="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717402 4769 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717408 4769 flags.go:64] FLAG: --log-text-split-stream="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717413 4769 flags.go:64] FLAG: --logging-format="text" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717419 4769 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717425 4769 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717431 4769 flags.go:64] FLAG: --manifest-url="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717436 4769 flags.go:64] FLAG: --manifest-url-header="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717444 4769 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717449 4769 flags.go:64] FLAG: --max-open-files="1000000" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717456 4769 flags.go:64] FLAG: --max-pods="110" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717462 4769 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717468 4769 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717473 4769 flags.go:64] FLAG: --memory-manager-policy="None" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717479 4769 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717484 4769 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717490 4769 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717496 4769 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717513 4769 flags.go:64] FLAG: --node-status-max-images="50" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717519 4769 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717525 4769 flags.go:64] FLAG: --oom-score-adj="-999" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717530 4769 flags.go:64] FLAG: --pod-cidr="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717536 4769 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717544 4769 flags.go:64] FLAG: --pod-manifest-path="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717553 4769 flags.go:64] FLAG: --pod-max-pids="-1" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717558 4769 flags.go:64] FLAG: --pods-per-core="0" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717564 4769 flags.go:64] FLAG: --port="10250" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717570 4769 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717575 4769 flags.go:64] FLAG: --provider-id="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717581 4769 flags.go:64] FLAG: --qos-reserved="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717586 4769 flags.go:64] FLAG: --read-only-port="10255" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717592 4769 flags.go:64] FLAG: --register-node="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717597 4769 flags.go:64] FLAG: --register-schedulable="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717603 4769 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717612 4769 flags.go:64] FLAG: 
--registry-burst="10" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717618 4769 flags.go:64] FLAG: --registry-qps="5" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717624 4769 flags.go:64] FLAG: --reserved-cpus="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717629 4769 flags.go:64] FLAG: --reserved-memory="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717637 4769 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717643 4769 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717649 4769 flags.go:64] FLAG: --rotate-certificates="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717655 4769 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717661 4769 flags.go:64] FLAG: --runonce="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717666 4769 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717672 4769 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717678 4769 flags.go:64] FLAG: --seccomp-default="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717684 4769 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717689 4769 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717695 4769 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717701 4769 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717707 4769 flags.go:64] FLAG: --storage-driver-password="root" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717712 4769 flags.go:64] FLAG: --storage-driver-secure="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717717 4769 flags.go:64] FLAG: --storage-driver-table="stats" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717723 4769 flags.go:64] FLAG: --storage-driver-user="root" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717728 4769 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717735 4769 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717745 4769 flags.go:64] FLAG: --system-cgroups="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717751 4769 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717761 4769 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717766 4769 flags.go:64] FLAG: --tls-cert-file="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717772 4769 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717779 4769 flags.go:64] FLAG: --tls-min-version="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717785 4769 flags.go:64] FLAG: --tls-private-key-file="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717816 4769 flags.go:64] FLAG: --topology-manager-policy="none" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717822 4769 flags.go:64] FLAG: --topology-manager-policy-options="" 
Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717828 4769 flags.go:64] FLAG: --topology-manager-scope="container" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717833 4769 flags.go:64] FLAG: --v="2" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717841 4769 flags.go:64] FLAG: --version="false" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717848 4769 flags.go:64] FLAG: --vmodule="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717854 4769 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.717861 4769 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718026 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718034 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718039 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718044 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718049 4769 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718055 4769 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718059 4769 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718067 4769 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
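[Editor's note] The flags.go:64 records above are the kubelet's dump of every command-line flag's effective value, one journal record per flag with the value quoted. A sketch that parses a saved journal into a name-to-value dict, e.g. to diff the flag settings of two startups; the feature-gate warning records around the dump do not match the pattern and are simply ignored:

# Sketch: parse the flags.go:64 `FLAG: --name="value"` records from a saved
# journal into a dict, e.g. to diff flag settings between kubelet startups.
import re

FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="(.*)"$')

def parse_flags(journal_text: str) -> dict[str, str]:
    flags = {}
    for line in journal_text.splitlines():
        m = FLAG_RE.search(line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

# Example (against this log): parse_flags(text)["--node-ip"] == "192.168.126.11"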
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718073 4769 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718080 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718086 4769 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718092 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718098 4769 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718104 4769 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718109 4769 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718114 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718119 4769 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718125 4769 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718130 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718135 4769 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718140 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718145 4769 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718149 4769 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718154 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718159 4769 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718164 4769 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718169 4769 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718174 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718179 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718184 4769 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718190 4769 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718196 4769 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718201 4769 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718206 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718212 4769 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718218 4769 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718224 4769 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718233 4769 feature_gate.go:330] unrecognized feature gate: Example Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718238 4769 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718243 4769 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718249 4769 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718254 4769 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718259 4769 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718265 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718270 4769 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718274 4769 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718279 4769 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718285 4769 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718290 4769 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718295 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718300 4769 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718305 4769 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718310 4769 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718315 4769 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718320 4769 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718325 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718330 4769 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718335 4769 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718340 4769 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718346 4769 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718352 4769 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718358 4769 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718364 4769 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718369 4769 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718375 4769 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718380 4769 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718385 4769 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718391 4769 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718396 4769 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718403 4769 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.718408 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.718417 4769 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.732859 4769 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.732908 4769 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733067 4769 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733081 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733086 4769 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733093 4769 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733098 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 
13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733104 4769 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733109 4769 feature_gate.go:330] unrecognized feature gate: Example Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733114 4769 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733119 4769 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733124 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733129 4769 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733134 4769 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733139 4769 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733144 4769 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733148 4769 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733154 4769 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733159 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733164 4769 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733168 4769 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733174 4769 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733179 4769 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733184 4769 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733189 4769 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733194 4769 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733199 4769 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733204 4769 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733208 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733216 4769 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733225 4769 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733231 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733237 4769 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733243 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733250 4769 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733259 4769 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733265 4769 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733271 4769 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733276 4769 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733282 4769 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733287 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733292 4769 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733296 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733301 4769 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733307 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733312 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733317 4769 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733322 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733327 4769 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733333 4769 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733339 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733345 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733350 4769 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733356 4769 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733361 4769 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733367 4769 
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733372 4769 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733376 4769 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733383 4769 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733389 4769 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733397 4769 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733402 4769 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733408 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733414 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733419 4769 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733425 4769 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733430 4769 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733437 4769 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
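[Editor's note] Each parsing pass ends with an info-level feature_gate.go:386 summary of what was actually applied, the "feature gates: {map[...]}" record; it appears above after an earlier pass and repeats below after the later ones, always with the same effective map. A sketch that turns that record into a Python dict for programmatic checks:

# Sketch: parse a "feature gates: {map[Name:bool ...]}" summary record
# (printed after each parsing pass during startup) into a Python dict.
import re

def parse_gate_summary(record: str) -> dict[str, bool]:
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", record)
    if not m:
        return {}
    pairs = (item.split(":", 1) for item in m.group(1).split())
    return {name: value == "true" for name, value in pairs}

# Example (against this log):
# parse_gate_summary(record)["ValidatingAdmissionPolicy"] -> True
# parse_gate_summary(record)["NodeSwap"] -> False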
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733443 4769 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733449 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733455 4769 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733461 4769 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733466 4769 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.733475 4769 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733646 4769 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733657 4769 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733663 4769 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733668 4769 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733673 4769 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733678 4769 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733683 4769 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733689 4769 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733693 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733698 4769 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733703 4769 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733708 4769 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733714 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733719 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733724 4769 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733728 4769 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733733 4769 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733738 4769 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733743 4769 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733748 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733753 4769 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733758 4769 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733763 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733768 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733773 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733780 4769 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733816 4769 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733826 4769 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733832 4769 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733839 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733845 4769 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733851 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733857 4769 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733863 4769 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733869 4769 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733874 4769 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733879 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733886 4769 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733893 4769 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733902 4769 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733908 4769 feature_gate.go:330] unrecognized feature gate: Example Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733913 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733918 4769 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733925 4769 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733932 4769 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733937 4769 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733943 4769 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733948 4769 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733953 4769 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733957 4769 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733962 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733967 4769 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733972 4769 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733977 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733982 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733987 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733991 4769 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.733996 4769 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734001 4769 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734007 4769 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734014 4769 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734021 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734026 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734032 4769 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734038 4769 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734044 4769 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734049 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734054 4769 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734059 4769 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734064 4769 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.734069 4769 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.734077 4769 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.734590 4769 server.go:940] "Client rotation is on, will bootstrap in background" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.738171 4769 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.738295 4769 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
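[Editor's note] The certificate_manager records that follow show client certificate rotation enabled, an expiration of 2026-02-24, and a rotation deadline of 2025-11-22, i.e. rotation is scheduled well before expiry; the first CSR attempt then fails with "connection refused" because the API server is not reachable this early in boot, and the kubelet retries. A sketch of the jittered-deadline idea is below; the 70-90% window is an assumption about how client-go's certificate manager picks the deadline, not something this log states:

# Sketch of a jittered rotation deadline: pick a uniformly random point in
# the 70-90% span of the certificate's validity window. The exact fractions
# are an assumption about client-go's certificate manager, not taken from
# this log.
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    validity = not_after - not_before
    fraction = random.uniform(0.7, 0.9)
    return not_before + timedelta(seconds=validity.total_seconds() * fraction)

# With a notBefore in early 2025 and a notAfter of 2026-02-24, a deadline in
# November 2025 (as logged below) is consistent with a scheme like this.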
Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.739325 4769 server.go:997] "Starting client certificate rotation" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.739364 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.739599 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-22 15:16:05.523069156 +0000 UTC Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.739734 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.748391 4769 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.750678 4769 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.752479 4769 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.761247 4769 log.go:25] "Validated CRI v1 runtime API" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.782592 4769 log.go:25] "Validated CRI v1 image API" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.784471 4769 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.787807 4769 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-13-39-08-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.787917 4769 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.810017 4769 manager.go:217] Machine: {Timestamp:2026-01-22 13:43:40.807707041 +0000 UTC m=+0.218817060 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a3bb8776-1087-4679-a96f-5f1347bd430e BootID:c179e315-653f-44a2-90da-146c8bca7b57 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8a:c1:7b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8a:c1:7b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:9d:58:90 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e0:53:8f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2e:96:19 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:8b:20:e1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5e:84:a8:ad:5b:83 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f2:7e:ee:fc:47:81 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] 
Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.810527 4769 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.810908 4769 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.812121 4769 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.812460 4769 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.812531 4769 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.812904 4769 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.812925 4769 
container_manager_linux.go:303] "Creating device plugin manager" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.813250 4769 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.813307 4769 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.813571 4769 state_mem.go:36] "Initialized new in-memory state store" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.813869 4769 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.814894 4769 kubelet.go:418] "Attempting to sync node with API server" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.814931 4769 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.814974 4769 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.815005 4769 kubelet.go:324] "Adding apiserver pod source" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.815025 4769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.817008 4769 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.817479 4769 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.817514 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.817560 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.817698 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.817651 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818266 4769 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818871 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 22 13:43:40 crc 
kubenswrapper[4769]: I0122 13:43:40.818900 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818910 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818920 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818933 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818941 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818951 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818965 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818976 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.818986 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.819019 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.819028 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.819652 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.820192 4769 server.go:1280] "Started kubelet" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.820506 4769 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.820507 4769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.820736 4769 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.822106 4769 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 13:43:40 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.823363 4769 server.go:460] "Adding debug handlers to kubelet server" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.823606 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.823637 4769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.823985 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:30:27.487330519 +0000 UTC Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.824081 4769 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.824096 4769 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.824116 4769 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.824105 4769 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.824754 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.824988 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.823181 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d117487e317f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 13:43:40.820158454 +0000 UTC m=+0.231268393,LastTimestamp:2026-01-22 13:43:40.820158454 +0000 UTC m=+0.231268393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.825854 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.826107 4769 factory.go:55] Registering systemd factory Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.826134 4769 factory.go:221] Registration of the systemd container factory successfully Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.827839 4769 factory.go:153] Registering CRI-O factory Jan 22 13:43:40 crc kubenswrapper[4769]: 
I0122 13:43:40.827922 4769 factory.go:221] Registration of the crio container factory successfully Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.828125 4769 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.828430 4769 factory.go:103] Registering Raw factory Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.828506 4769 manager.go:1196] Started watching for new ooms in manager Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.832616 4769 manager.go:319] Starting recovery of all containers Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.839841 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840296 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840318 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840374 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840390 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840403 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840417 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840429 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840443 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840460 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840475 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840491 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840503 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840519 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840532 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840545 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840559 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840572 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840587 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840601 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840615 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840630 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840645 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840659 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840676 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840691 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840710 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840727 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840743 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840759 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840775 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840829 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840864 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840880 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840894 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840907 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840920 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840934 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840949 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840962 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840975 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.840992 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841007 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841023 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841037 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841053 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841067 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841082 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841096 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841108 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841122 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841135 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841206 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841224 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841239 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841253 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841267 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841281 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841294 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841308 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841323 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841336 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841349 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841362 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841377 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841391 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841404 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841416 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841429 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841442 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841454 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841467 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841480 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841494 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841506 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841520 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841535 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841548 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841564 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841579 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841591 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841605 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841617 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841631 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841644 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841658 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841670 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.841684 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842273 4769 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842301 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842317 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842333 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842346 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842359 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842372 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842384 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842395 4769 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842409 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842422 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842435 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842448 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842461 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842476 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842489 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842503 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842522 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842550 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842565 4769 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842579 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842592 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842608 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842623 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842636 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842650 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842666 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842739 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842755 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842768 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842781 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842814 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842830 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842843 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842857 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842873 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842886 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842900 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842914 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842929 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842942 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842953 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842967 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842980 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.842994 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843007 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843022 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843036 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843049 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843062 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843076 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843090 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843104 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843117 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843129 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843146 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843160 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843174 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843188 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843201 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843214 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843228 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843240 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843280 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843298 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843313 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843328 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843343 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843359 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843373 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843387 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843400 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843413 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843426 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843440 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843453 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843466 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843481 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843494 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843507 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843520 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843533 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843547 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843561 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843575 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843588 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843603 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843616 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843629 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843641 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843656 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843669 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843681 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843694 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843708 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843722 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843736 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843749 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843762 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843777 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843806 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843822 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843835 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843851 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843864 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843879 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843892 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843904 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843917 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843930 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843943 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843956 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843968 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843982 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.843995 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.844012 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.844025 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.844039 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.844054 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.844068 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.844080 4769 reconstruct.go:97] "Volume reconstruction finished" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.844090 4769 reconciler.go:26] "Reconciler: start to sync state" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.852136 4769 manager.go:324] Recovery completed Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.868268 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.871924 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.871972 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.871987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.873016 4769 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.873053 4769 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.873094 4769 state_mem.go:36] "Initialized new in-memory state store" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.879707 4769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.881923 4769 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.882002 4769 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.882047 4769 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.882130 4769 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 13:43:40 crc kubenswrapper[4769]: W0122 13:43:40.882675 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.882750 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.917754 4769 policy_none.go:49] "None policy: Start" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.918813 4769 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.918859 4769 state_mem.go:35] "Initializing new in-memory state store" Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.924556 4769 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.978959 4769 manager.go:334] "Starting Device Plugin manager" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.979120 4769 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.979142 4769 server.go:79] "Starting device plugin registration server" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.979703 4769 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.979728 4769 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.979992 4769 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.980242 4769 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.980265 4769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.982237 4769 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.982336 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.983506 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.983563 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.983576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.983840 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.984298 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.984369 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.984938 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.985006 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.985019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.985117 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.985329 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.985392 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.985979 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986012 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986024 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986034 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986192 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986606 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986664 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.986989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.987017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.987054 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.987185 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.987678 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.987720 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988164 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988193 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988206 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988408 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988426 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988655 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.988696 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.989664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.989689 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:40 crc kubenswrapper[4769]: I0122 13:43:40.989701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:40 crc kubenswrapper[4769]: E0122 13:43:40.993458 4769 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 13:43:41 crc kubenswrapper[4769]: E0122 13:43:41.027445 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.047753 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.047874 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.047919 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.047953 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.047990 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048026 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048062 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048096 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048132 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048166 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048205 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048294 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048341 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048423 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.048477 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:41 crc 
kubenswrapper[4769]: I0122 13:43:41.080420 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.082089 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.082127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.082138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.082165 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 13:43:41 crc kubenswrapper[4769]: E0122 13:43:41.082628 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150237 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150457 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150478 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150493 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150611 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150610 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150635 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" 
(UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150652 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150714 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150729 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150731 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150811 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150696 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150759 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150819 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150555 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150576 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150695 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150849 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150891 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150992 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.150915 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.151011 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.151028 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.151037 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.151043 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.151079 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.151075 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.151148 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.283747 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.285171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.285240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.285262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.285303 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 13:43:41 crc kubenswrapper[4769]: E0122 13:43:41.286112 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.312506 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.319199 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: W0122 13:43:41.341977 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5ee6165ffbd53eace8722cf8a68635f34a769f0f1bfb40509ec9428f2eea50fc WatchSource:0}: Error finding container 5ee6165ffbd53eace8722cf8a68635f34a769f0f1bfb40509ec9428f2eea50fc: Status 404 returned error can't find the container with id 5ee6165ffbd53eace8722cf8a68635f34a769f0f1bfb40509ec9428f2eea50fc Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.344272 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: W0122 13:43:41.347321 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-a26daac5608affeee8e0a49f45ce025052fcb46c8a36eb17a1e967eb0c512f78 WatchSource:0}: Error finding container a26daac5608affeee8e0a49f45ce025052fcb46c8a36eb17a1e967eb0c512f78: Status 404 returned error can't find the container with id a26daac5608affeee8e0a49f45ce025052fcb46c8a36eb17a1e967eb0c512f78 Jan 22 13:43:41 crc kubenswrapper[4769]: W0122 13:43:41.358697 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-970c81e74317513189c3d6dbf9e3f39f65fabff05490d94e4e1520ce8785ed8a WatchSource:0}: Error finding container 970c81e74317513189c3d6dbf9e3f39f65fabff05490d94e4e1520ce8785ed8a: Status 404 returned error can't find the container with id 970c81e74317513189c3d6dbf9e3f39f65fabff05490d94e4e1520ce8785ed8a Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.358976 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.370140 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:41 crc kubenswrapper[4769]: W0122 13:43:41.393027 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-adb8d183f03f094ef72c0257610ca897a63609a2a27f4bd3e33a9b1373ba9ed7 WatchSource:0}: Error finding container adb8d183f03f094ef72c0257610ca897a63609a2a27f4bd3e33a9b1373ba9ed7: Status 404 returned error can't find the container with id adb8d183f03f094ef72c0257610ca897a63609a2a27f4bd3e33a9b1373ba9ed7 Jan 22 13:43:41 crc kubenswrapper[4769]: E0122 13:43:41.429288 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.686985 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.688425 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.688459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.688472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.688495 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 13:43:41 crc kubenswrapper[4769]: E0122 13:43:41.688941 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 22 13:43:41 crc kubenswrapper[4769]: W0122 
13:43:41.758573 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:41 crc kubenswrapper[4769]: E0122 13:43:41.758639 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.822535 4769 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.824767 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:26:34.266921511 +0000 UTC Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.890480 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.890684 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"adb8d183f03f094ef72c0257610ca897a63609a2a27f4bd3e33a9b1373ba9ed7"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.893089 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5" exitCode=0 Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.893174 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.893252 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8da3abaf40ce80849b68bb742b2d4e7d405339c4bf50f29cef6ee1865a565bfd"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.893422 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.896752 4769 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c8e31b29c1c4da39b2854e1750a906e380a822c602e2b7a24158ee582ba95627" exitCode=0 Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.896905 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c8e31b29c1c4da39b2854e1750a906e380a822c602e2b7a24158ee582ba95627"} Jan 22 13:43:41 crc 
kubenswrapper[4769]: I0122 13:43:41.897007 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"970c81e74317513189c3d6dbf9e3f39f65fabff05490d94e4e1520ce8785ed8a"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.897402 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.898201 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.898350 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.898373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.900902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.900964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.900981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.901943 4769 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59" exitCode=0 Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.902056 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.902099 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a26daac5608affeee8e0a49f45ce025052fcb46c8a36eb17a1e967eb0c512f78"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.902280 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.902803 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.903231 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.903258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.903281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.904133 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.904153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 
13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.904163 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.904521 4769 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166" exitCode=0 Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.904556 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.904580 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5ee6165ffbd53eace8722cf8a68635f34a769f0f1bfb40509ec9428f2eea50fc"} Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.904662 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.905467 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.905486 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:41 crc kubenswrapper[4769]: I0122 13:43:41.905496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:41 crc kubenswrapper[4769]: W0122 13:43:41.967392 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:41 crc kubenswrapper[4769]: E0122 13:43:41.967513 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:41 crc kubenswrapper[4769]: W0122 13:43:41.991754 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:41 crc kubenswrapper[4769]: E0122 13:43:41.991942 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:42 crc kubenswrapper[4769]: E0122 13:43:42.230330 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s" Jan 22 
13:43:42 crc kubenswrapper[4769]: W0122 13:43:42.456897 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.50:6443: connect: connection refused Jan 22 13:43:42 crc kubenswrapper[4769]: E0122 13:43:42.457627 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.50:6443: connect: connection refused" logger="UnhandledError" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.489830 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.492964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.493253 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.493268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.493531 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 13:43:42 crc kubenswrapper[4769]: E0122 13:43:42.495447 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.50:6443: connect: connection refused" node="crc" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.824917 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:17:34.284219073 +0000 UTC Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.851975 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.914987 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.915033 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.915039 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.915117 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.915869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.915902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.915913 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.918456 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.918518 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.918540 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.918559 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.920957 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d0315382a0b43a2b3069391b3c63464c38b94daf1baf2700f5001abca332fc53"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.921073 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.921748 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.921773 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.921783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.923500 4769 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43" exitCode=0 Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.923518 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.923624 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.924256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 
13:43:42.924286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.924296 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.927439 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.927494 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.927505 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9"} Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.927598 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.928417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.928462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:42 crc kubenswrapper[4769]: I0122 13:43:42.928479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.825962 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:57:25.628082369 +0000 UTC Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.932598 4769 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440" exitCode=0 Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.932747 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440"} Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.933080 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.934833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.934912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.934936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.939465 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d"} Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.939557 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.939577 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.944175 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.944205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.944250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.944266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.944283 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:43 crc kubenswrapper[4769]: I0122 13:43:43.944297 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.089611 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.095767 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.097176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.097215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.097228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.097282 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.826076 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:43:54.060699675 +0000 UTC Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.850418 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.850589 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.852313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.852380 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:44 crc 
kubenswrapper[4769]: I0122 13:43:44.852394 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.937340 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.946040 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f"} Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.946070 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.946080 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d"} Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.946119 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f"} Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.946203 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.947059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.947110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.947122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.947122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.947151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:44 crc kubenswrapper[4769]: I0122 13:43:44.947162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.441981 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.826199 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:56:53.846432522 +0000 UTC Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.952268 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79"} Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.952323 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f"} Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.952349 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.952397 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.953261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.953294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.953308 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.953616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.953641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:45 crc kubenswrapper[4769]: I0122 13:43:45.953654 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.435586 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.435831 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.437525 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.437593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.437612 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.826947 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:14:44.47276907 +0000 UTC Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.955505 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.955572 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.957230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.957232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.957333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.957293 
4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.957554 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:46 crc kubenswrapper[4769]: I0122 13:43:46.959061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:47 crc kubenswrapper[4769]: I0122 13:43:47.227392 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 22 13:43:47 crc kubenswrapper[4769]: I0122 13:43:47.827339 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:41:44.419655019 +0000 UTC Jan 22 13:43:47 crc kubenswrapper[4769]: I0122 13:43:47.959934 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:47 crc kubenswrapper[4769]: I0122 13:43:47.961107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:47 crc kubenswrapper[4769]: I0122 13:43:47.961181 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:47 crc kubenswrapper[4769]: I0122 13:43:47.961205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.459630 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.459904 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.461568 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.461642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.461668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.828239 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 06:49:37.577022152 +0000 UTC Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.870716 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.871002 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.873326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.873405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:48 crc kubenswrapper[4769]: I0122 13:43:48.873434 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:49 crc 
kubenswrapper[4769]: I0122 13:43:49.389268 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.389611 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.391569 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.391619 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.391639 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.397759 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.436492 4769 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.436625 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.829099 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:51:11.981660415 +0000 UTC Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.966300 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.967525 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.967602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.967628 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.977295 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.977500 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.978605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.978650 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:49 crc kubenswrapper[4769]: I0122 13:43:49.978667 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:50 crc kubenswrapper[4769]: I0122 13:43:50.829487 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:50:44.682634531 +0000 UTC Jan 22 13:43:50 crc kubenswrapper[4769]: E0122 13:43:50.994376 4769 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 13:43:51 crc kubenswrapper[4769]: I0122 13:43:51.830277 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:52:13.36891372 +0000 UTC Jan 22 13:43:52 crc kubenswrapper[4769]: I0122 13:43:52.822933 4769 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 13:43:52 crc kubenswrapper[4769]: I0122 13:43:52.831437 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:31:30.819323494 +0000 UTC Jan 22 13:43:52 crc kubenswrapper[4769]: E0122 13:43:52.853022 4769 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 22 13:43:53 crc kubenswrapper[4769]: E0122 13:43:53.831457 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 22 13:43:53 crc kubenswrapper[4769]: I0122 13:43:53.831510 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:55:20.938225797 +0000 UTC Jan 22 13:43:53 crc kubenswrapper[4769]: W0122 13:43:53.832715 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 22 13:43:53 crc kubenswrapper[4769]: I0122 13:43:53.832869 4769 trace.go:236] Trace[1514421227]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 13:43:43.831) (total time: 10001ms): Jan 22 13:43:53 crc kubenswrapper[4769]: Trace[1514421227]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:43:53.832) Jan 22 13:43:53 crc kubenswrapper[4769]: Trace[1514421227]: [10.001413906s] [10.001413906s] END Jan 22 13:43:53 crc kubenswrapper[4769]: E0122 13:43:53.832901 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS 
handshake timeout" logger="UnhandledError" Jan 22 13:43:53 crc kubenswrapper[4769]: I0122 13:43:53.912347 4769 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 13:43:53 crc kubenswrapper[4769]: I0122 13:43:53.912410 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 13:43:53 crc kubenswrapper[4769]: I0122 13:43:53.918213 4769 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 13:43:53 crc kubenswrapper[4769]: I0122 13:43:53.918417 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.102228 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.102510 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.104050 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.104097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.104107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.813293 4769 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.813387 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.832584 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:55:13.536652413 +0000 UTC Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.937975 4769 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 13:43:54 crc kubenswrapper[4769]: I0122 13:43:54.938038 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.451218 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.451494 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.451979 4769 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.452048 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.453205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.453268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.453284 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.456510 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.832849 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 19:07:44.715091818 +0000 UTC Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.980589 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.981182 4769 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.981247 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" 
output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.981762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.981842 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:43:55 crc kubenswrapper[4769]: I0122 13:43:55.981860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:43:56 crc kubenswrapper[4769]: I0122 13:43:56.833441 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:59:30.687044128 +0000 UTC Jan 22 13:43:57 crc kubenswrapper[4769]: I0122 13:43:57.026104 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 13:43:57 crc kubenswrapper[4769]: I0122 13:43:57.042771 4769 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 13:43:57 crc kubenswrapper[4769]: I0122 13:43:57.392496 4769 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 13:43:57 crc kubenswrapper[4769]: I0122 13:43:57.834074 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:53:12.509278225 +0000 UTC Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.361903 4769 csr.go:261] certificate signing request csr-sgvn2 is approved, waiting to be issued Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.371314 4769 csr.go:257] certificate signing request csr-sgvn2 is issued Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.834933 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:55:06.850702272 +0000 UTC Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.891954 4769 trace.go:236] Trace[1962280142]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 13:43:43.896) (total time: 14995ms): Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[1962280142]: ---"Objects listed" error: 14995ms (13:43:58.891) Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[1962280142]: [14.995360076s] [14.995360076s] END Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.891999 4769 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.892619 4769 trace.go:236] Trace[2111744768]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 13:43:44.350) (total time: 14542ms): Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[2111744768]: ---"Objects listed" error: 14541ms (13:43:58.892) Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[2111744768]: [14.542023122s] [14.542023122s] END Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.892667 4769 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.893366 4769 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.894680 4769 trace.go:236] Trace[2098567548]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 13:43:44.115) (total time: 14779ms):
Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[2098567548]: ---"Objects listed" error: 14779ms (13:43:58.894)
Jan 22 13:43:58 crc kubenswrapper[4769]: Trace[2098567548]: [14.779246253s] [14.779246253s] END
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.894712 4769 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 22 13:43:58 crc kubenswrapper[4769]: E0122 13:43:58.897034 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.935138 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:43:58 crc kubenswrapper[4769]: I0122 13:43:58.941375 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:43:58 crc kubenswrapper[4769]: E0122 13:43:58.995467 4769 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.372373 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 13:38:58 +0000 UTC, rotation deadline is 2026-12-15 23:09:48.04233864 +0000 UTC
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.372445 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7857h25m48.669901516s for next certificate rotation
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.825499 4769 apiserver.go:52] "Watching apiserver"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.827395 4769 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.827935 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.828336 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.828415 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.828469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
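[annotation] The Trace[…] blocks above come from the k8s.io/utils/trace helper: the reflectors' initial LIST calls blocked for ~15s while the apiserver finished coming up, which pushed each trace over its logging threshold. A minimal sketch of the pattern; the threshold and the sleep standing in for the blocking LIST are illustrative:

// list_trace.go — the step-trace pattern behind "Reflector ListAndWatch" traces.
package main

import (
	"time"

	utiltrace "k8s.io/utils/trace"
)

func listAndWatch() {
	trace := utiltrace.New("Reflector ListAndWatch",
		utiltrace.Field{Key: "name", Value: "k8s.io/client-go/informers/factory.go:160"})
	defer trace.LogIfLong(10 * time.Second) // the whole trace is printed only if it ran long

	time.Sleep(15 * time.Second) // stand-in for the blocking LIST call
	trace.Step("Objects listed") // recorded steps appear as the ---"..." lines in the log
}

func main() { listAndWatch() }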
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:43:59 crc kubenswrapper[4769]: E0122 13:43:59.828630 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:43:59 crc kubenswrapper[4769]: E0122 13:43:59.828660 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.828725 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.829035 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.829067 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:43:59 crc kubenswrapper[4769]: E0122 13:43:59.829113 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.829915 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.831040 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.831874 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.831907 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.832196 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.832337 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.832430 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.833777 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.835284 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 03:01:00.07186535 +0000 UTC Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.835375 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.858097 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.869616 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.881677 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.894656 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.904418 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.916655 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.925011 4769 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.928667 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.942661 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.959313 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.991232 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.993114 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d" exitCode=255
Jan 22 13:43:59 crc kubenswrapper[4769]: I0122 13:43:59.993206 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d"}
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000431 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000479 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000541 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000565 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000773 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000834 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000892 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000964 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.000991 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001267 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001336 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001865 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001944 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.001974 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002525 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002460 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002611 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002967 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003034 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003143 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.002638 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003227 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003254 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003514 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.003917 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.004868 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.004885 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005005 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005085 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005370 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005444 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005481 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005519 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005550 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005577 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005624 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005645 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005654 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005672 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005701 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005761 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005810 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005839 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005848 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005888 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.005916 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006019 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006066 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006097 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006123 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006154 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006178 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006230 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006243 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006258 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006351 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006354 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006380 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006408 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006435 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006456 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006477 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006498 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006519 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006544 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006565 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006586 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" 
(UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006609 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006631 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006655 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006683 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006708 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006733 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006758 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006783 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006859 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") 
pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006895 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006920 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006943 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006970 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006995 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007019 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007041 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007061 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007081 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007103 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007128 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007151 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007173 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007194 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006381 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006498 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006514 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006758 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006786 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006941 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.006960 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007086 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007118 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007211 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007216 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.007221 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.507198997 +0000 UTC m=+19.918308926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007374 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007395 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007418 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007449 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007616 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007633 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007832 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007923 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.007983 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008064 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008100 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008156 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008179 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008266 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008291 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008327 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.008488 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009559 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009585 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009844 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009948 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.009967 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010037 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010057 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010298 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010364 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010385 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010405 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010438 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010461 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010486 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010508 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010534 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010554 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010578 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010581 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010602 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010630 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010653 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010680 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010680 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010706 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010762 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010809 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010837 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010865 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010888 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010936 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010953 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010971 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010990 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011010 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011028 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011049 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011066 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011084 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011103 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011122 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011138 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011157 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011198 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011216 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011233 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011256 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011272 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011288 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011353 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011370 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011387 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011404 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011420 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011439 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011457 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011476 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011493 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011511 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011528 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011554 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011603 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011626 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011657 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011683 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011709 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011761 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011805 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011836 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011865 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011922 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011947 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011971 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011996 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012022 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012044 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012096 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012120 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012143 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 22 13:44:00 crc 
kubenswrapper[4769]: I0122 13:44:00.012166 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012189 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012239 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012266 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012289 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012315 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012338 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012361 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012385 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 
13:44:00.012410 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012433 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012460 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012482 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012507 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012533 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012558 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012584 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012607 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012629 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 22 13:44:00 crc 
kubenswrapper[4769]: I0122 13:44:00.012657 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012680 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012705 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012730 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010760 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012755 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010949 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.010960 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011042 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011191 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011437 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011455 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011648 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.011931 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012783 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012954 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012979 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013005 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013032 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013058 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013086 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013111 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013136 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013160 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013186 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013239 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013264 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013290 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013313 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013340 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013365 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013419 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013449 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013473 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013497 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013522 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013548 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013632 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013672 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013706 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013754 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013788 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013845 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013926 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014010 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014038 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014065 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014092 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014216 4769 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014235 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014252 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014267 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014283 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014297 4769 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014310 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014323 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014337 4769 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014350 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014362 4769 
reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014375 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014388 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014400 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014414 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014426 4769 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014439 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014452 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014465 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014479 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014493 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014506 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014518 4769 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014531 4769 reconciler_common.go:293] "Volume 
detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014545 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014559 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014600 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014614 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014628 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014641 4769 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014653 4769 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014666 4769 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014681 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014695 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014709 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014724 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc 
kubenswrapper[4769]: I0122 13:44:00.014738 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014754 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014778 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014807 4769 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014821 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014835 4769 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014849 4769 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014863 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014875 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014887 4769 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014899 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014914 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014927 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014940 4769 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014953 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014965 4769 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014977 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014989 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015001 4769 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015013 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015026 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015038 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015051 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015064 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015077 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015091 4769 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015106 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015120 4769 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015132 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015145 4769 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015159 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015171 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015183 4769 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016109 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012197 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.019230 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.012734 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013004 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013281 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013521 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.013781 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014083 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.014629 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015461 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015858 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.015899 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016206 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016536 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016711 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016867 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016932 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.016936 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017086 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017131 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017163 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017296 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017309 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017620 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017641 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.017617 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018007 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018357 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018376 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018528 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018702 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018647 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018766 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018875 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.018964 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.019270 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.019807 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020182 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020259 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020346 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020674 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020944 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.020948 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021014 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021041 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021188 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021544 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021577 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021659 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021774 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021902 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.021984 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022069 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022147 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022306 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022410 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022553 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022691 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022960 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.022911 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.023235 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.023579 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.023780 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.024309 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.024531 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.024595 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.024648 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.524629429 +0000 UTC m=+19.935739358 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.024829 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.024958 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025227 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025670 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025726 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025764 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.025923 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.026175 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.026227 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.026409 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.026774 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.027149 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.027743 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.027785 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.028181 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.028221 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.028207 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.028772 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.029134 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.029182 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.029545 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.029614 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.030068 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.030416 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.031152 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.031498 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.031657 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.031861 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.032062 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.032523 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.032866 4769 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.033636 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.034681 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.035277 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.035897 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.036040 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.036945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.036953 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.037356 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.037467 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.038247 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.038284 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.038312 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.038441 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.538417083 +0000 UTC m=+19.949527022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.039248 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.039706 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.040414 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.047316 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.047339 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.047529 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.055998 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.056286 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.056540 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.056812 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.056939 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.057271 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.058055 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.058997 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.059147 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.059561 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.060018 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.061874 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.061928 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.061946 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062024 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062051 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062064 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062030 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.562005037 +0000 UTC m=+19.973114966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.062143 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:00.5621252 +0000 UTC m=+19.973235129 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.063399 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.065266 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.065266 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.065955 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.066007 4769 scope.go:117] "RemoveContainer" containerID="1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.067145 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.067189 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.071104 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.071220 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.075688 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.081759 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.097540 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.099174 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.107180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118068 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118441 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118536 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118605 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118686 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118697 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118709 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118750 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118763 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118776 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.118881 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119040 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119062 4769 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119075 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119089 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119359 4769 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119386 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119434 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119447 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119460 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119471 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119666 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119678 4769 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119690 4769 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119741 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119753 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.119764 
4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120029 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120051 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120066 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120080 4769 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120095 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120106 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120118 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120130 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120143 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120154 4769 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120166 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120178 4769 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 
13:44:00.120190 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120202 4769 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120213 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120224 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120235 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120246 4769 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120257 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120269 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120280 4769 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120302 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120316 4769 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120328 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120340 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120351 4769 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120363 4769 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120374 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120385 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120397 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120409 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120420 4769 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120431 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120444 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120457 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120468 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120480 4769 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120492 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120506 4769 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120518 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120530 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120541 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120552 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120564 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120576 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120592 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120603 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120615 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120626 4769 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120640 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120651 4769 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120663 4769 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120674 4769 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120686 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120700 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120711 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120722 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120734 4769 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120746 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120758 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120770 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.120782 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121520 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121533 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121548 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121573 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121587 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121598 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121611 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121623 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121634 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121646 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121657 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121669 4769 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121680 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121692 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121704 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121716 4769 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121727 4769 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.124029 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.121739 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138624 4769 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138642 4769 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138656 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138666 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138693 4769 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 22 
13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138701 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138710 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138719 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138728 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138737 4769 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138747 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138755 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138763 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138771 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138779 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138859 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138869 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138876 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc 
kubenswrapper[4769]: I0122 13:44:00.138885 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138894 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138901 4769 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138909 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138917 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.138925 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.150409 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.150685 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.150884 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.151023 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.158645 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.171077 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.183948 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-73082e32f399f2751384fafa16bc563373007b8c6310ad5597de02858cea9459 WatchSource:0}: Error finding container 73082e32f399f2751384fafa16bc563373007b8c6310ad5597de02858cea9459: Status 404 returned error can't find the container with id 73082e32f399f2751384fafa16bc563373007b8c6310ad5597de02858cea9459 Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.204892 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-4bbd424fca5b4902629d61b3c58894a6957091689563e5f0b63f6bfd625de7c1 WatchSource:0}: Error finding container 4bbd424fca5b4902629d61b3c58894a6957091689563e5f0b63f6bfd625de7c1: Status 404 returned error can't find the container with id 4bbd424fca5b4902629d61b3c58894a6957091689563e5f0b63f6bfd625de7c1 Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.213856 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-x582x"] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.214395 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.215162 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-hwhw7"] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.215500 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219306 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219372 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219321 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219449 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.219572 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.223153 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.223337 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.223448 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.223363 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239358 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0af8746-c9f0-48e6-8a60-02fed286b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239411 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c8w6\" (UniqueName: \"kubernetes.io/projected/34fa095e-fc7f-431c-8421-1178e63721ac-kube-api-access-2c8w6\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239433 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0af8746-c9f0-48e6-8a60-02fed286b419-proxy-tls\") pod 
\"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239455 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhgc5\" (UniqueName: \"kubernetes.io/projected/f0af8746-c9f0-48e6-8a60-02fed286b419-kube-api-access-bhgc5\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239476 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0af8746-c9f0-48e6-8a60-02fed286b419-rootfs\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.239505 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34fa095e-fc7f-431c-8421-1178e63721ac-hosts-file\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.279223 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.304123 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.315995 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.325100 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340662 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c8w6\" (UniqueName: \"kubernetes.io/projected/34fa095e-fc7f-431c-8421-1178e63721ac-kube-api-access-2c8w6\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340699 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0af8746-c9f0-48e6-8a60-02fed286b419-proxy-tls\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340719 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0af8746-c9f0-48e6-8a60-02fed286b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340749 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhgc5\" (UniqueName: \"kubernetes.io/projected/f0af8746-c9f0-48e6-8a60-02fed286b419-kube-api-access-bhgc5\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340769 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0af8746-c9f0-48e6-8a60-02fed286b419-rootfs\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340827 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/34fa095e-fc7f-431c-8421-1178e63721ac-hosts-file\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340920 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/34fa095e-fc7f-431c-8421-1178e63721ac-hosts-file\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.340960 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f0af8746-c9f0-48e6-8a60-02fed286b419-rootfs\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.341545 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f0af8746-c9f0-48e6-8a60-02fed286b419-mcd-auth-proxy-config\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.344716 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f0af8746-c9f0-48e6-8a60-02fed286b419-proxy-tls\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.349091 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.359500 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhgc5\" (UniqueName: \"kubernetes.io/projected/f0af8746-c9f0-48e6-8a60-02fed286b419-kube-api-access-bhgc5\") pod \"machine-config-daemon-hwhw7\" (UID: \"f0af8746-c9f0-48e6-8a60-02fed286b419\") " pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.371730 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-2c8w6\" (UniqueName: \"kubernetes.io/projected/34fa095e-fc7f-431c-8421-1178e63721ac-kube-api-access-2c8w6\") pod \"node-resolver-x582x\" (UID: \"34fa095e-fc7f-431c-8421-1178e63721ac\") " pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.379964 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.396905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.418400 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.437821 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c
2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.454371 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.467423 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.478729 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.487767 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.499231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.509297 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.524274 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.536030 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.542288 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.542363 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.542397 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542481 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542492 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.542449225 +0000 UTC m=+20.953559154 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542528 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.542514888 +0000 UTC m=+20.953624817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542630 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.542737 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.542716053 +0000 UTC m=+20.953826052 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.546154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.550769 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-x582x" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.560618 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.565264 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34fa095e_fc7f_431c_8421_1178e63721ac.slice/crio-6b1a0696b8bf09d52e09fe15609feddd7054598d2bc82b07f1836e6309422f36 WatchSource:0}: Error finding container 6b1a0696b8bf09d52e09fe15609feddd7054598d2bc82b07f1836e6309422f36: Status 404 returned error can't find the container with id 6b1a0696b8bf09d52e09fe15609feddd7054598d2bc82b07f1836e6309422f36 Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.567580 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.583585 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0af8746_c9f0_48e6_8a60_02fed286b419.slice/crio-cab2b1c1881b6a2660c7f4a16de8d4376d73a16ba7d5480a5446a254b2df9c51 WatchSource:0}: Error finding container cab2b1c1881b6a2660c7f4a16de8d4376d73a16ba7d5480a5446a254b2df9c51: Status 404 returned error can't find the container with id cab2b1c1881b6a2660c7f4a16de8d4376d73a16ba7d5480a5446a254b2df9c51 Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.642829 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.642924 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643064 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643106 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643122 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643136 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643158 4769 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643180 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643195 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.643175862 +0000 UTC m=+21.054285811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: E0122 13:44:00.643263 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:01.643245933 +0000 UTC m=+21.054355872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.659891 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fclh4"] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.660169 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-d9wdl"] Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.660692 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.661029 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.663036 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.663148 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.663240 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.664012 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.664266 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.664849 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.664916 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.685045 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.703575 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.721663 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.735761 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.739709 4769 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740119 4769 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740153 
4769 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740141 4769 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740458 4769 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740841 4769 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.740899 4769 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.741016 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-58b4c7f79c-55gtf/status\": read tcp 38.102.83.50:40852->38.102.83.50:6443: use of closed network connection" Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741331 4769 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741362 4769 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741383 4769 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741398 4769 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741419 4769 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741585 4769 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741743 4769 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741806 4769 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: 
object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741836 4769 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741835 4769 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741874 4769 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741897 4769 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741910 4769 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741928 4769 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741954 4769 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741961 4769 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741978 4769 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.741970 4769 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 22 13:44:00 crc 
kubenswrapper[4769]: I0122 13:44:00.743302 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-k8s-cni-cncf-io\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743334 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cnibin\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743353 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-system-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743369 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-os-release\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743385 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-daemon-config\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743402 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hprv8\" (UniqueName: \"kubernetes.io/projected/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-kube-api-access-hprv8\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743418 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-hostroot\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743444 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-multus\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743457 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: 
\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743472 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743486 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-cnibin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk8w9\" (UniqueName: \"kubernetes.io/projected/d4186e93-df8a-49d3-9068-c8b8acd05baa-kube-api-access-kk8w9\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743514 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-multus-certs\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743530 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-bin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743546 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-kubelet\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743559 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743573 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-os-release\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743589 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-system-cni-dir\") pod 
\"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743603 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743623 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-cni-binary-copy\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743636 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-netns\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743662 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-etc-kubernetes\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743679 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-conf-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.743698 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-socket-dir-parent\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.765989 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.790947 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.807103 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.822692 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.835444 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:17:21.611269991 +0000 UTC Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.837041 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845161 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-socket-dir-parent\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845564 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-k8s-cni-cncf-io\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845678 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-k8s-cni-cncf-io\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " 
pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-socket-dir-parent\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845885 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cnibin\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.845708 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cnibin\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846104 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-system-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846428 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-os-release\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846543 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hprv8\" (UniqueName: \"kubernetes.io/projected/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-kube-api-access-hprv8\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846667 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-os-release\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846688 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-hostroot\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846447 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-system-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846771 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-daemon-config\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846854 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-multus\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846897 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846933 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846960 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-cnibin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846984 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-multus\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.846997 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk8w9\" (UniqueName: \"kubernetes.io/projected/d4186e93-df8a-49d3-9068-c8b8acd05baa-kube-api-access-kk8w9\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847052 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-bin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-kubelet\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847097 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-multus-certs\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847118 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-os-release\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-system-cni-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847195 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847215 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-etc-kubernetes\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847230 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-cni-binary-copy\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847250 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-netns\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847266 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-conf-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847290 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-cnibin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 
22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847314 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-conf-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847335 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-os-release\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847339 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-system-cni-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847398 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-cni-bin\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847439 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-var-lib-kubelet\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847469 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-multus-certs\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847520 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-cni-dir\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847522 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-multus-daemon-config\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847733 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-hostroot\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847819 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-host-run-netns\") pod \"multus-fclh4\" (UID: 
\"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847899 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.848116 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.848044 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-cni-binary-copy\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.847889 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d4186e93-df8a-49d3-9068-c8b8acd05baa-etc-kubernetes\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.848138 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d4186e93-df8a-49d3-9068-c8b8acd05baa-cni-binary-copy\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.849783 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.863779 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is 
after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.891626 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.892180 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.892690 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.893493 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.894174 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.894743 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.896070 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.896641 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.897527 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.898176 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.900430 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.900988 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.902125 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.903145 4769 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.903696 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.904695 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.905253 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.906271 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.906679 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.907267 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.907440 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk8w9\" (UniqueName: \"kubernetes.io/projected/d4186e93-df8a-49d3-9068-c8b8acd05baa-kube-api-access-kk8w9\") pod \"multus-fclh4\" (UID: \"d4186e93-df8a-49d3-9068-c8b8acd05baa\") " pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.907642 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hprv8\" (UniqueName: \"kubernetes.io/projected/cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76-kube-api-access-hprv8\") pod \"multus-additional-cni-plugins-d9wdl\" (UID: \"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\") " pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.908338 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.908843 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.910099 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.910524 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.911543 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.912077 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.912650 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.913915 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.914490 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.914898 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.915415 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.919690 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.920374 4769 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.920482 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.922873 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.923676 4769 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.924660 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.926168 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.926864 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.927836 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.928546 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.929628 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.930275 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.931423 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.931998 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.932093 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.933092 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.933546 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.934468 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.935043 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.936160 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.936652 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.937574 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.938204 4769 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.939004 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.939924 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.940547 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.947909 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 
crc kubenswrapper[4769]: I0122 13:44:00.971387 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":
\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.976531 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.984090 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fclh4" Jan 22 13:44:00 crc kubenswrapper[4769]: W0122 13:44:00.988898 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd0cf7bc_a4fc_4a12_aafc_28598fdd5d76.slice/crio-ba13a6b2d87e58399adfdbecb9243ba037f19d350694f214dd00579482ef1d88 WatchSource:0}: Error finding container ba13a6b2d87e58399adfdbecb9243ba037f19d350694f214dd00579482ef1d88: Status 404 returned error can't find the container with id ba13a6b2d87e58399adfdbecb9243ba037f19d350694f214dd00579482ef1d88 Jan 22 13:44:00 crc kubenswrapper[4769]: I0122 13:44:00.989104 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.004026 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4bbd424fca5b4902629d61b3c58894a6957091689563e5f0b63f6bfd625de7c1"} Jan 22 13:44:01 crc kubenswrapper[4769]: W0122 13:44:01.005999 4769 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4186e93_df8a_49d3_9068_c8b8acd05baa.slice/crio-6a37281e385959c7ee151c48162eaa01b371ce4fe79f3441940766a91ad77fb8 WatchSource:0}: Error finding container 6a37281e385959c7ee151c48162eaa01b371ce4fe79f3441940766a91ad77fb8: Status 404 returned error can't find the container with id 6a37281e385959c7ee151c48162eaa01b371ce4fe79f3441940766a91ad77fb8 Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.012638 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.012707 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.012723 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"cab2b1c1881b6a2660c7f4a16de8d4376d73a16ba7d5480a5446a254b2df9c51"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.015753 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x582x" event={"ID":"34fa095e-fc7f-431c-8421-1178e63721ac","Type":"ContainerStarted","Data":"5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.015925 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x582x" event={"ID":"34fa095e-fc7f-431c-8421-1178e63721ac","Type":"ContainerStarted","Data":"6b1a0696b8bf09d52e09fe15609feddd7054598d2bc82b07f1836e6309422f36"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.019504 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.019728 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"73082e32f399f2751384fafa16bc563373007b8c6310ad5597de02858cea9459"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.024849 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.027239 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.027294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.027305 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"72d69196859e0025d5f218cae9fe1ef484c08e68e44d261a30b1576c71ad4753"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.040748 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrg8z"] Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.041237 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.041765 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.043956 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.044333 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.044525 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.045549 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.046342 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.046348 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.051230 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.051605 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052393 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052476 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052514 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052543 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052576 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052689 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052781 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.052971 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053027 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053068 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053140 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053196 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053300 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053338 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053368 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053398 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053425 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053499 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053541 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.053575 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.054464 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.057061 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerStarted","Data":"ba13a6b2d87e58399adfdbecb9243ba037f19d350694f214dd00579482ef1d88"} Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.112993 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156353 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156415 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156461 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156493 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156511 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156527 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156542 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156557 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156579 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156594 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156612 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156652 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156667 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156686 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156702 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156730 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156761 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.156852 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157009 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157156 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157207 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157245 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: 
\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157294 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157338 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157373 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157920 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157970 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.157925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.158072 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.158075 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 
13:44:01.158122 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.158330 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.158843 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.159493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.165293 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.169335 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.206649 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") pod \"ovnkube-node-jrg8z\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.230258 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22
T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.277228 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.310983 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.326288 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.342045 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.358379 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.378573 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.394706 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.417885 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.438241 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.442466 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.476148 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.518103 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.558951 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.561384 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561571 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.561534882 +0000 UTC m=+22.972644801 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.561611 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.561676 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561783 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561832 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561857 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.561849871 +0000 UTC m=+22.972959800 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.561874 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.561864681 +0000 UTC m=+22.972974610 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.599505 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.607596 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.628671 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.662884 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.662965 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663089 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663130 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663135 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663144 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663156 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663170 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663223 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.663201113 +0000 UTC m=+23.074311042 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.663245 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:03.663238074 +0000 UTC m=+23.074348003 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.667638 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.707844 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.709415 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.727605 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.767910 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 13:44:01 crc 
kubenswrapper[4769]: I0122 13:44:01.787573 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.827378 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.827625 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.835875 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:50:58.412647706 +0000 UTC Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.875287 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.882280 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.882421 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.882461 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.882418 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.882567 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:01 crc kubenswrapper[4769]: E0122 13:44:01.882732 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.916442 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.929517 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.948016 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 13:44:01 crc kubenswrapper[4769]: I0122 13:44:01.987885 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.008011 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.039700 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.047867 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.067949 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.070997 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.071060 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"6a37281e385959c7ee151c48162eaa01b371ce4fe79f3441940766a91ad77fb8"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.074318 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7" exitCode=0 Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.074374 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.076468 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" exitCode=0 Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.076509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.076555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"e2d3c55e05f15106417cacacd13bd2ff48a7d39f5b85eb5a6e946e2cf2413457"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.088014 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.098870 4769 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.102472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.102527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.102536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.102657 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.128381 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.150755 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.201933 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.208318 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.233153 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 
22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.245924 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bqn6j"] Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.246360 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.250219 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268202 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16fc232a-07ad-4611-8612-7b1c3f784c14-serviceca\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268253 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fc232a-07ad-4611-8612-7b1c3f784c14-host\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268296 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pwhl\" (UniqueName: \"kubernetes.io/projected/16fc232a-07ad-4611-8612-7b1c3f784c14-kube-api-access-2pwhl\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268560 4769 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.268964 4769 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.270968 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.289258 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.328776 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.332390 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.337923 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.338179 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.338272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.338356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.338418 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.347274 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.368626 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.368695 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.369035 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pwhl\" (UniqueName: \"kubernetes.io/projected/16fc232a-07ad-4611-8612-7b1c3f784c14-kube-api-access-2pwhl\") pod \"node-ca-bqn6j\" (UID: 
\"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.369089 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16fc232a-07ad-4611-8612-7b1c3f784c14-serviceca\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.369108 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fc232a-07ad-4611-8612-7b1c3f784c14-host\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.369158 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/16fc232a-07ad-4611-8612-7b1c3f784c14-host\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.374159 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.387835 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.393981 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
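
Every one of the repeated "Error updating node status" failures bottoms out in the same TLS handshake: the serving certificate behind https://127.0.0.1:9743 (the node.network-node-identity.openshift.io webhook) expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-22. A minimal sketch for reading that certificate's validity window straight off the endpoint, assuming shell access on the CRC node; certcheck.go is a hypothetical helper, and InsecureSkipVerify is deliberate because verification is exactly what is failing.

// certcheck.go - hypothetical helper: prints the validity window of the
// serving certificate at the webhook endpoint from the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want to inspect the cert, not trust it
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	cert := certs[0]
	fmt.Println("subject:    ", cert.Subject)
	fmt.Println("not before: ", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("not after:  ", cert.NotAfter.Format(time.RFC3339))
	fmt.Println("expired now:", time.Now().After(cert.NotAfter))
}
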
event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.397291 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.407782 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.408167 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415235 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415278 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
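
The payload the kubelet re-sends on every attempt (printed in full at 13:44:02.368695 above) is a strategic-merge patch of .status: the four node conditions keyed by type via $setElementOrder, allocatable/capacity, the cached image list, and nodeInfo. A small sketch of the condition shape, decoding one abridged condition from that patch; the struct fields mirror Kubernetes' v1.NodeCondition, and condshape.go is a hypothetical file name.

// condshape.go - decodes a trimmed sample of the node conditions carried in
// the status patch above; struct fields mirror Kubernetes' v1.NodeCondition.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
}

func main() {
	// Abridged from the patch in the log above.
	sample := `[{"lastHeartbeatTime":"2026-01-22T13:44:02Z",
	             "lastTransitionTime":"2026-01-22T13:44:02Z",
	             "message":"kubelet has sufficient memory available",
	             "reason":"KubeletHasSufficientMemory",
	             "status":"False","type":"MemoryPressure"}]`

	var conds []nodeCondition
	if err := json.Unmarshal([]byte(sample), &conds); err != nil {
		panic(err)
	}
	for _, c := range conds {
		fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}
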
event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.415326 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.427685 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.428749 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: E0122 13:44:02.429019 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
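
The E0122 13:44:02.429019 line marks the kubelet giving up for this sync: node status updates are attempted a fixed number of times (the upstream kubelet constant nodeStatusUpdateRetry, 5, which matches the five failed attempts logged in this second) before surfacing "update node status exceeds retry count" and waiting for the next sync period. A minimal sketch of that control flow; retryloop.go and tryPatch are hypothetical stand-ins for the kubelet's tryUpdateNodeStatus loop.

// retryloop.go - a sketch of the give-up behavior in the log above: a fixed
// number of patch attempts, then the "exceeds retry count" error. tryPatch is
// a hypothetical stand-in for the kubelet's real tryUpdateNodeStatus.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // same name and value as the upstream kubelet constant

func updateNodeStatus(tryPatch func(attempt int) error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryPatch(i); err == nil {
			return nil
		}
		// the kubelet logs "Error updating node status, will retry" here
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	err := updateNodeStatus(func(attempt int) error {
		// every attempt fails the same way while the webhook cert is expired
		return errors.New("x509: certificate has expired or is not yet valid")
	})
	fmt.Println(err) // update node status exceeds retry count
}
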
event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431257 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431279 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.431890 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/16fc232a-07ad-4611-8612-7b1c3f784c14-serviceca\") pod \"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.466270 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.467598 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.504319 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pwhl\" (UniqueName: \"kubernetes.io/projected/16fc232a-07ad-4611-8612-7b1c3f784c14-kube-api-access-2pwhl\") pod 
\"node-ca-bqn6j\" (UID: \"16fc232a-07ad-4611-8612-7b1c3f784c14\") " pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.533702 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.536582 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34
720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.567300 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bqn6j" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.580724 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.616996 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637510 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.637569 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.658739 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.695498 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.743183 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745429 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745443 4769 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745460 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.745471 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.774874 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.815574 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.837048 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:30:58.990195912 +0000 UTC Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.848147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.854806 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.894739 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.937905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.950445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.950483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.950491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:02 crc kubenswrapper[4769]: 
I0122 13:44:02.950507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.950519 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:02Z","lastTransitionTime":"2026-01-22T13:44:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:02 crc kubenswrapper[4769]: I0122 13:44:02.978635 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:02Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.015808 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052803 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052852 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.052861 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.057459 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.081481 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bqn6j" event={"ID":"16fc232a-07ad-4611-8612-7b1c3f784c14","Type":"ContainerStarted","Data":"55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.081538 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bqn6j" event={"ID":"16fc232a-07ad-4611-8612-7b1c3f784c14","Type":"ContainerStarted","Data":"327fde5cbfec4910b000d0772fd70a5e06aec89502e45c3ffe43507237f307c3"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.084742 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe" exitCode=0 Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.084843 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090311 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090457 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090558 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090641 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" 
event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090721 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.090836 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.102291 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.141118 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157752 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157831 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157846 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.157880 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.173708 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.221911 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262227 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262274 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.262658 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.295679 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.343082 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364552 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.364662 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.380861 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.416294 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.465961 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc 
kubenswrapper[4769]: I0122 13:44:03.467682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.467745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.467763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.467785 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.467826 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.510757 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z 
is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.542380 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569409 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569419 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569433 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.569443 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.574894 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.582172 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.582276 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.582254794 +0000 UTC m=+26.993364723 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.582715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.582876 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.582973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.583249 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.583190528 +0000 UTC m=+26.994300487 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.583299 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.583453 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.583426564 +0000 UTC m=+26.994536523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.614889 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192
.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.657773 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671572 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671610 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671619 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.671640 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.684482 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.684546 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684671 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684677 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684699 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684710 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684714 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684726 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684773 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.684754297 +0000 UTC m=+27.095864226 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.684816 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:07.684783498 +0000 UTC m=+27.095893427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.694994 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.735076 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774160 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.774202 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.777888 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.819655 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.837940 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:42:51.629259116 +0000 UTC Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.858418 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.876925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.876980 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.876998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.877021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.877036 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.883181 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.883216 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.883225 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.883305 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.883479 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:03 crc kubenswrapper[4769]: E0122 13:44:03.883561 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.901377 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.943113 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.976732 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:03 crc kubenswrapper[4769]: I0122 13:44:03.979298 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:03Z","lastTransitionTime":"2026-01-22T13:44:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.015044 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.063254 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082689 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082780 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.082824 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.095953 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8" exitCode=0 Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.096027 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.097969 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.099642 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.143231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.179170 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186183 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186221 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.186293 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.222180 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.258429 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289046 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.289093 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.301022 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.338277 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.373772 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390940 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.390968 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.415925 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.460151 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494253 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.494385 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.502669 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.535850 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.577154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.596167 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.621025 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17
be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.665203 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15
c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
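Note on the recurring failure above: every "Failed to update status for pod" entry traces to one cause. The kubelet's status patches are intercepted by the pod.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-22, so the TLS handshake fails before any patch is applied. A minimal Go sketch of the validity-window check that yields this exact x509 error string is below; the certificate path is a hypothetical stand-in, since the log does not name the file on disk.

    // Hedged sketch (not taken from kubelet source): reproduce the check that
    // makes Go's crypto/x509 report "certificate has expired or is not yet valid".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Hypothetical path for illustration; the log does not identify the file.
        pemBytes, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        now := time.Now()
        // Same NotBefore/NotAfter comparison the TLS handshake performs.
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("invalid: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339),
                cert.NotAfter.UTC().Format(time.RFC3339))
            return
        }
        fmt.Println("certificate is within its validity window")
    }

Running this against an expired certificate prints the same "current time ... is after ..." shape seen in the webhook errors throughout this log.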
Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.698215 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.700964 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert
-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.742889 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:04Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800451 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.800518 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.838186 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:56:38.970668026 +0000 UTC Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:04 crc kubenswrapper[4769]: I0122 13:44:04.903210 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:04Z","lastTransitionTime":"2026-01-22T13:44:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.005575 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.104496 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e" exitCode=0 Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.104583 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.110868 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.111380 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.125312 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.137703 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.152761 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.166077 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.184354 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.201835 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214627 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.214678 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.222617 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z 
is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.261469 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.274718 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.289928 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.308046 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.318951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.318993 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.319010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.319032 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.319049 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.322825 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.335844 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.358995 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.371948 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:05Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422319 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422385 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.422397 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.525887 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629380 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629406 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.629417 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733401 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733421 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.733435 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835624 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.835636 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.839165 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 12:28:14.126013778 +0000 UTC Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.882589 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:05 crc kubenswrapper[4769]: E0122 13:44:05.882692 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.882829 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.882883 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:05 crc kubenswrapper[4769]: E0122 13:44:05.883084 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:05 crc kubenswrapper[4769]: E0122 13:44:05.883250 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:05 crc kubenswrapper[4769]: I0122 13:44:05.938508 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:05Z","lastTransitionTime":"2026-01-22T13:44:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041387 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041441 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.041471 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.116686 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c" exitCode=0 Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.116742 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.139744 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143805 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143853 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143884 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.143895 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.161390 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.177856 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.195519 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.216585 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.229833 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.244123 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248727 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248818 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.248831 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.255231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.267334 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.286008 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.297405 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.313694 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.330156 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z 
is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.347894 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351659 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.351705 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.361054 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:06Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454937 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.454969 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.558282 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.661864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.661953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.661965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.661990 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.662009 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765727 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765835 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765879 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.765896 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.840241 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:50:28.167947119 +0000 UTC Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.868869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.868945 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.868972 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.869039 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.869063 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972095 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:06 crc kubenswrapper[4769]: I0122 13:44:06.972142 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:06Z","lastTransitionTime":"2026-01-22T13:44:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.075666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.076063 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.076082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.076491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.076548 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.133895 4769 generic.go:334] "Generic (PLEG): container finished" podID="cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76" containerID="b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac" exitCode=0 Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.133937 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerDied","Data":"b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.165768 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z 
is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.180337 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.196852 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731c
a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.219844 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.239040 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.253363 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.271188 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc 
kubenswrapper[4769]: I0122 13:44:07.282369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282384 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.282397 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.290427 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.305832 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.320859 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.335414 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.348977 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.360780 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.373905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.384598 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385595 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385644 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.385753 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.396511 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:07Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.488227 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590500 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.590536 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.650419 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.650498 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.650530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650641 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650670 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.650633219 +0000 UTC m=+35.061743178 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650723 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.650708461 +0000 UTC m=+35.061818470 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650736 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.650930 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.650889405 +0000 UTC m=+35.061999334 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693780 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693835 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.693849 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.751217 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.751294 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751408 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751415 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751473 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751504 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751582 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.75155878 +0000 UTC m=+35.162668749 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751426 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751633 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.751673 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.751660833 +0000 UTC m=+35.162770802 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.796331 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.841280 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 14:07:03.693814485 +0000 UTC Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.882864 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.882942 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.882865 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.883148 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.883054 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:07 crc kubenswrapper[4769]: E0122 13:44:07.883322 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899621 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:07 crc kubenswrapper[4769]: I0122 13:44:07.899667 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:07Z","lastTransitionTime":"2026-01-22T13:44:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003464 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.003547 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106646 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.106659 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.145362 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" event={"ID":"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76","Type":"ContainerStarted","Data":"f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.149968 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.150245 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.150421 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.150486 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.176456 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.177873 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.178278 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.189809 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.201134 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.209309 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.217426 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.237433 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.260185 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.282160 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.294638 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.307041 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312156 4769 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312175 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.312228 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.321983 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.333227 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.346388 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.357490 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.371476 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.386971 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.403565 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414497 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.414592 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.417685 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.433493 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.446350 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.467420 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.490760 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572
e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.515389 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd93
22825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551
440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518005 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518094 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.518112 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.530896 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.546878 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.568413 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.585019 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.606764 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.620856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.621180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.621247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.621320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.621404 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.636423 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.666727 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.676667 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:08Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.723974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.724008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.724020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.724036 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.724048 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826440 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826456 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826480 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.826496 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.842144 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:38:21.188186627 +0000 UTC Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930149 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930202 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:08 crc kubenswrapper[4769]: I0122 13:44:08.930261 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:08Z","lastTransitionTime":"2026-01-22T13:44:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033581 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033646 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.033704 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136214 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136296 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136357 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.136379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.238978 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.239010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.239019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.239032 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.239043 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341347 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341367 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.341414 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444359 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444454 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.444473 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.546637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.546959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.547068 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.547153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.547345 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.649606 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752156 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.752184 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.842333 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:32:57.983804199 +0000 UTC Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.853955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.853989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.853997 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.854010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.854018 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.883224 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.883311 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:09 crc kubenswrapper[4769]: E0122 13:44:09.883398 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.883316 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:09 crc kubenswrapper[4769]: E0122 13:44:09.883510 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:09 crc kubenswrapper[4769]: E0122 13:44:09.883635 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957554 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957672 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:09 crc kubenswrapper[4769]: I0122 13:44:09.957690 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:09Z","lastTransitionTime":"2026-01-22T13:44:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060434 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.060497 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.159457 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/0.log" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162526 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.162570 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.163843 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b" exitCode=1 Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.163903 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.164680 4769 scope.go:117] "RemoveContainer" containerID="8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.188283 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.215310 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
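[Annotation] From this point on, every pod status patch fails the same way: the update must pass the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, well before the node's current clock of 2026-01-22. A hedged diagnostic sketch for inspecting the certificate the webhook actually presents (run on the node; the address comes from the error text, and InsecureSkipVerify is used only to read the certificate, not to trust it):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address taken from the webhook error in the records above.
	cfg := &tls.Config{InsecureSkipVerify: true} // inspect the cert without verifying it
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", cfg)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject: ", cert.Subject)
	fmt.Println("notAfter:", cert.NotAfter.UTC().Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the failure mode in the log: the validity window has closed,
		// so any verifying client rejects the handshake with
		// "x509: certificate has expired or is not yet valid".
		fmt.Println("certificate is expired")
	}
}
```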
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:10Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713207 6030 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713339 6030 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 13:44:09.713430 6030 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713447 6030 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 13:44:09.713715 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 13:44:09.713739 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 13:44:09.713842 6030 factory.go:656] Stopping watch factory\\\\nI0122 13:44:09.713875 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 13:44:09.713887 6030 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.235934 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.247894 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
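[Annotation] The identical expired-certificate rejection repeats for every pod whose status the kubelet tries to report (multus, ovnkube-node, etcd, kube-controller-manager, and the rest below), so this is a node-wide failure of status reporting, not a per-pod problem. The same validity-window check can be run offline against an exported certificate; a minimal sketch, assuming the cert has been saved as PEM (the file path is hypothetical, crypto/x509 only):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("webhook-cert.pem") // hypothetical path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	now := time.Now()
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		// The condition the TLS handshakes in the log keep tripping over:
		// current time 2026-01-22T13:44:10Z is after NotAfter 2025-08-24T17:21:41Z.
		fmt.Printf("certificate has expired or is not yet valid: current time %s is outside %s..%s\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```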
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264700 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264824 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.264855 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.271152 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.281827 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.294174 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.307988 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.321201 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.340629 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.355330 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.366652 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367748 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.367782 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.386185 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.399238 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.413809 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.470931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.470979 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.470992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.471010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.471024 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573595 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573635 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573667 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.573678 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675950 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675962 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.675996 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778618 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.778631 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.842873 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:38:58.229576558 +0000 UTC Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.881194 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.901324 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4
b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.916360 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.931633 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.949595 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.974038 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.983479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.983523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:10 crc 
kubenswrapper[4769]: I0122 13:44:10.983536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.983553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:10 crc kubenswrapper[4769]: I0122 13:44:10.983565 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:10Z","lastTransitionTime":"2026-01-22T13:44:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.062223 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090
e712cea593113a338827293b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:10Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713207 6030 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713339 6030 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 13:44:09.713430 6030 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713447 6030 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 13:44:09.713715 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 13:44:09.713739 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 13:44:09.713842 6030 factory.go:656] Stopping watch factory\\\\nI0122 13:44:09.713875 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 13:44:09.713887 6030 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.082666 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.085244 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.096905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: E0122 13:44:11.099295 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c028db8_99b9_422d_ba46_e1a2db06ce3c.slice/crio-21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a.scope\": RecentStats: unable to find data in memory cache]" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.114732 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.126344 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.140741 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.152855 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.166160 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.169053 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/0.log" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.171501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.171846 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.178855 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187937 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187954 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.187965 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.198236 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.216968 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\
":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.232263 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.247447 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.261056 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.271841 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.283287 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.290681 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.297466 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.307684 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.320888 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.343233 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:10Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713207 6030 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713339 6030 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 13:44:09.713430 6030 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713447 6030 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 13:44:09.713715 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 13:44:09.713739 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 13:44:09.713842 6030 factory.go:656] Stopping watch factory\\\\nI0122 13:44:09.713875 6030 handler.go:208] Removed 
*v1.Namespace event handler 5\\\\nI0122 13:44:09.713887 6030 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.1
1\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.372630 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.386650 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392852 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392921 4769 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.392942 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.406028 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.425972 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.448691 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:11Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.496335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.496404 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc 
kubenswrapper[4769]: I0122 13:44:11.496425 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.496452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.496470 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600370 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600433 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.600475 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.703188 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806357 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.806727 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.844033 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 08:02:13.137322602 +0000 UTC Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.882724 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.882757 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:11 crc kubenswrapper[4769]: E0122 13:44:11.882881 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.882738 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:11 crc kubenswrapper[4769]: E0122 13:44:11.883051 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:11 crc kubenswrapper[4769]: E0122 13:44:11.883243 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908945 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:11 crc kubenswrapper[4769]: I0122 13:44:11.908986 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:11Z","lastTransitionTime":"2026-01-22T13:44:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011594 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011618 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.011636 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114611 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114656 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.114693 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.176394 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/1.log" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.177414 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/0.log" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.180399 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a" exitCode=1 Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.180453 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.180499 4769 scope.go:117] "RemoveContainer" containerID="8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.181027 4769 scope.go:117] "RemoveContainer" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a" Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.181215 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.201830 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.215459 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.220900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.221076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.221106 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.221130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.221147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.239103 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.255343 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.270263 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.291384 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323519 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc 
kubenswrapper[4769]: I0122 13:44:12.323566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323583 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323595 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.323680 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8176020a9c6407ebbc5e5935aca998a9a8133090e712cea593113a338827293b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:10Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713207 6030 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713339 6030 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 13:44:09.713430 6030 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 13:44:09.713447 6030 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0122 13:44:09.713715 6030 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0122 13:44:09.713739 6030 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0122 13:44:09.713842 6030 factory.go:656] Stopping watch factory\\\\nI0122 13:44:09.713875 6030 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0122 13:44:09.713887 6030 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e 
map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuse
s\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.362942 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.380002 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.395732 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.408480 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.426951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.427201 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.427386 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.427546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.425920 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.427684 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.443538 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.457228 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.467415 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.530893 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.531000 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.531022 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.531050 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.531067 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.545874 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.560356 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565179 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.565205 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.585315 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590323 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.590420 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.610619 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.614683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.614827 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
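The NotReady condition above will keep reappearing until a network config shows up in /etc/kubernetes/cni/net.d/: the container runtime reports NetworkReady=false whenever that directory holds no CNI configuration, and the kubelet mirrors the runtime's answer into the node's Ready condition. A minimal sketch of that directory test, assuming the runtime accepts *.conf, *.conflist, and *.json files the way libcni's config loader does (an illustrative stand-in, not CRI-O's actual code):

```go
// cnicheck.go: report NetworkReady the way a CRI runtime decides it,
// by scanning the CNI conf dir for usable network configurations.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var confs []string
	// libcni-style candidates: plain conf, conflist, or json network configs.
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(1)
		}
		confs = append(confs, matches...)
	}
	if len(confs) == 0 {
		// An empty result is what surfaces in the log as
		// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("NetworkReady=true, configs:", confs)
}
```

On this node the directory presumably stays empty because the OVN-Kubernetes control plane cannot come up while its webhook certificate (see the errors above and below) is expired, so the readiness loop never breaks out.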
event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.614894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.614969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.615048 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.628306 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.631969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.632052 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.632067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.632082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.632093 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.649902 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:12Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:12 crc kubenswrapper[4769]: E0122 13:44:12.650019 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
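Every one of the patch attempts dies in the same place: the API server's call to the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 fails TLS verification because the webhook's serving certificate expired on 2025-08-24, while the node clock reads 2026-01-22. A minimal sketch of the validity-window comparison Go's crypto/x509 performs during verification, which produces exactly the wording quoted in the log (the certificate path is a placeholder, not a path from this system):

```go
// certwindow.go: check a PEM certificate's validity window the same way
// crypto/x509 does when it rejects a handshake as expired.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt") // placeholder path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A certificate is only valid for NotBefore <= now <= NotAfter.
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```

Because the webhook intercepts node status updates, no amount of kubelet retrying can succeed until that serving certificate is renewed.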
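The certificate_manager record a few lines below shows the kubelet's side of the certificate story: its kubelet-serving certificate runs to 2026-02-24, but the rotation deadline of 2026-01-09 already lies behind the node clock of 2026-01-22, so the kubelet wants to rotate immediately. A sketch of how such a deadline comes about, assuming client-go's approach of picking a jittered point roughly 70 to 90 percent into the validity window and assuming a one-year certificate lifetime (both are assumptions; the exact jitter factor varies by release):

```go
// rotationdeadline.go: derive a jittered rotation deadline from a
// certificate's validity window, in the style of client-go's
// certificate manager.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	lifetime := 365 * 24 * time.Hour                          // assumed one-year cert
	notBefore := notAfter.Add(-lifetime)

	// Rotate at a random point 70-90% through the lifetime so that a fleet
	// of kubelets does not hammer the CA for renewals all at once.
	jitter := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	deadline := notBefore.Add(jitter)
	fmt.Println("rotation deadline:", deadline.UTC())

	nodeClock := time.Date(2026, 1, 22, 13, 44, 12, 0, time.UTC) // from the log
	if nodeClock.After(deadline) {
		fmt.Println("deadline passed: a new serving certificate should be requested now")
	}
}
```

A clock that jumps months past every deadline at once, a pattern typical of a CRC VM resumed long after it was built, is consistent with both this record and the expired webhook certificate above.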
event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651621 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.651649 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753932 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.753943 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.845097 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:25:26.204155786 +0000 UTC Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856896 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.856940 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.964530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.965026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.965048 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.965078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:12 crc kubenswrapper[4769]: I0122 13:44:12.965104 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:12Z","lastTransitionTime":"2026-01-22T13:44:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068188 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068307 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068378 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.068398 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171106 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.171191 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.187004 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/1.log" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.192420 4769 scope.go:117] "RemoveContainer" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a" Jan 22 13:44:13 crc kubenswrapper[4769]: E0122 13:44:13.192693 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.213851 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.229961 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.246338 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.262305 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.274870 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.275746 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.290683 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.306453 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.317538 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.333058 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.350842 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.378158 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.378943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.379562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.379588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.379617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.379631 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.406294 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.422750 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.437355 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.450916 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.482613 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.482837 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.482918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.483001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.483078 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.498841 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf"] Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.499517 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.502323 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.502603 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.519273 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.535238 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.550293 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.566680 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.584871 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586336 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.586385 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.607553 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.616049 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29c69aef-2c74-4731-8334-85c8c755be74-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.616189 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.616285 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.616327 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m892q\" (UniqueName: \"kubernetes.io/projected/29c69aef-2c74-4731-8334-85c8c755be74-kube-api-access-m892q\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 
22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.628394 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.642090 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.654486 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.668813 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688742 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688799 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688826 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.688836 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.689223 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.700721 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.713215 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.716975 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m892q\" (UniqueName: \"kubernetes.io/projected/29c69aef-2c74-4731-8334-85c8c755be74-kube-api-access-m892q\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.717037 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29c69aef-2c74-4731-8334-85c8c755be74-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.717141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.717202 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.718032 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.718279 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29c69aef-2c74-4731-8334-85c8c755be74-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.726181 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29c69aef-2c74-4731-8334-85c8c755be74-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.734055 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.739410 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m892q\" (UniqueName: 
\"kubernetes.io/projected/29c69aef-2c74-4731-8334-85c8c755be74-kube-api-access-m892q\") pod \"ovnkube-control-plane-749d76644c-pwktf\" (UID: \"29c69aef-2c74-4731-8334-85c8c755be74\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.749669 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.760739 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:13Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791018 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791068 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791099 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.791113 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.817256 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" Jan 22 13:44:13 crc kubenswrapper[4769]: W0122 13:44:13.831045 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c69aef_2c74_4731_8334_85c8c755be74.slice/crio-5edaad4124ada3aad16932af9fe04bc4918550c2d4ef151ac14a81e8d08a0968 WatchSource:0}: Error finding container 5edaad4124ada3aad16932af9fe04bc4918550c2d4ef151ac14a81e8d08a0968: Status 404 returned error can't find the container with id 5edaad4124ada3aad16932af9fe04bc4918550c2d4ef151ac14a81e8d08a0968 Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.846045 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:45:32.05931606 +0000 UTC Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.883294 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:13 crc kubenswrapper[4769]: E0122 13:44:13.883438 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.883548 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:13 crc kubenswrapper[4769]: E0122 13:44:13.883682 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.884201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:13 crc kubenswrapper[4769]: E0122 13:44:13.884566 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897438 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897450 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:13 crc kubenswrapper[4769]: I0122 13:44:13.897478 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:13Z","lastTransitionTime":"2026-01-22T13:44:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.000816 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103585 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.103630 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.196164 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" event={"ID":"29c69aef-2c74-4731-8334-85c8c755be74","Type":"ContainerStarted","Data":"5edaad4124ada3aad16932af9fe04bc4918550c2d4ef151ac14a81e8d08a0968"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206149 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.206160 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309599 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.309708 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.411999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.412058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.412075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.412098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.412115 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514431 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514594 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.514628 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.615692 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cfh49"] Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.616493 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: E0122 13:44:14.616586 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.617772 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.638433 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.657197 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.672358 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.694570 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.715217 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719713 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719842 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.719900 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.727294 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.727395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vshp2\" (UniqueName: \"kubernetes.io/projected/9764ff0b-ae92-470b-af85-7c8bb41642ba-kube-api-access-vshp2\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.750647 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\
"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.764426 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.781362 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.797236 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.816600 4769 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.826751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.826913 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.827642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.827669 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.827682 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.828222 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.828377 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vshp2\" (UniqueName: \"kubernetes.io/projected/9764ff0b-ae92-470b-af85-7c8bb41642ba-kube-api-access-vshp2\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: E0122 13:44:14.828435 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:14 crc kubenswrapper[4769]: E0122 13:44:14.828567 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:15.328533507 +0000 UTC m=+34.739643486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.835409 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.847301 4769 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:53:16.465001951 +0000 UTC Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.851911 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.856407 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vshp2\" (UniqueName: \"kubernetes.io/projected/9764ff0b-ae92-470b-af85-7c8bb41642ba-kube-api-access-vshp2\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.876611 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.896467 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.916426 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" 
Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.929892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930083 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930309 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:14Z","lastTransitionTime":"2026-01-22T13:44:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.930916 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.941934 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.942766 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.952743 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.967253 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.978324 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:14 crc kubenswrapper[4769]: I0122 13:44:14.988994 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:14Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.004138 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.015155 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.027712 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" 
Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.032304 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.048261 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9b
e8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.063291 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.082154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.096372 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.114860 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.135127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.135197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc 
kubenswrapper[4769]: I0122 13:44:15.135208 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.135225 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.135236 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.140769 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.157548 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.177324 4769 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.195646 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.201963 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" event={"ID":"29c69aef-2c74-4731-8334-85c8c755be74","Type":"ContainerStarted","Data":"10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.202030 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" event={"ID":"29c69aef-2c74-4731-8334-85c8c755be74","Type":"ContainerStarted","Data":"05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.212136 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.228725 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.237154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.237417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc 
kubenswrapper[4769]: I0122 13:44:15.237567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.237728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.237891 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.253048 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.272189 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd76
56244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.285742 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.299614 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.316885 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.333012 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.333135 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.333182 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. 
No retries permitted until 2026-01-22 13:44:16.333168195 +0000 UTC m=+35.744278134 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.336370 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340318 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.340433 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.352190 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.368037 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.418582 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.431934 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443159 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443211 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.443270 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.447682 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.466637 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.480928 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.502215 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.518014 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.532404 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:15Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.546598 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649431 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.649595 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.738420 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.738597 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.738567157 +0000 UTC m=+51.149677126 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.738684 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.739079 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.739131 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.739246 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.739534 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.739429539 +0000 UTC m=+51.150539508 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.739679 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.739549182 +0000 UTC m=+51.150659151 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.752869 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.840200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.840281 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.840498 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.840520 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.840534 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.840598 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
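
[note: the recurring 'object "namespace"/"name" not registered' failures are cache misses, not missing API objects: the kubelet serves ConfigMaps and Secrets for volume mounts from a local watch-based manager, and right after startup the objects referenced by these pods are not yet registered in that cache, so every MountVolume.SetUp fails and is retried with backoff. A toy illustration of that behavior, with invented names:]

    package main

    import "fmt"

    // objectCache stands in for the kubelet's watch-based configmap/secret
    // manager: mount operations read from it, a background watcher fills it.
    type objectCache struct {
        data map[string][]byte // "namespace/name" -> payload
    }

    func (c *objectCache) Get(namespace, name string) ([]byte, error) {
        key := namespace + "/" + name
        v, ok := c.data[key]
        if !ok {
            // Mirrors the log: the object may exist in the API server, but
            // this kubelet has not registered (synced) it locally yet.
            return nil, fmt.Errorf("object %q/%q not registered", namespace, name)
        }
        return v, nil
    }

    func main() {
        cache := &objectCache{data: map[string][]byte{}}
        _, err := cache.Get("openshift-network-console", "networking-console-plugin")
        fmt.Println(err)
    }
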
No retries permitted until 2026-01-22 13:44:31.840582448 +0000 UTC m=+51.251692377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.841042 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.841114 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.841138 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.841251 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:44:31.841219104 +0000 UTC m=+51.252329063 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.847778 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:35:06.690870987 +0000 UTC Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.856289 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.882952 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.883066 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.883148 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.883202 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.883358 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:15 crc kubenswrapper[4769]: E0122 13:44:15.883558 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:15 crc kubenswrapper[4769]: I0122 13:44:15.959280 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:15Z","lastTransitionTime":"2026-01-22T13:44:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.062256 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165202 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165233 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.165250 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.268570 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.346873 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:16 crc kubenswrapper[4769]: E0122 13:44:16.347067 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:16 crc kubenswrapper[4769]: E0122 13:44:16.347482 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:18.347458065 +0000 UTC m=+37.758568004 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370714 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.370898 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.473006 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.473257 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.473548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.473859 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.474147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.577933 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.577994 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.578011 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.578050 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.578075 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.681228 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.784316 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.848335 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:13:44.060995542 +0000 UTC Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.883438 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:16 crc kubenswrapper[4769]: E0122 13:44:16.883674 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887070 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887092 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887118 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.887144 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.989885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.990247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.990444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.990609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:16 crc kubenswrapper[4769]: I0122 13:44:16.990754 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:16Z","lastTransitionTime":"2026-01-22T13:44:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.094294 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197631 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.197688 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300128 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.300154 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.403981 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506570 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506686 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506740 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.506762 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.610236 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713182 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.713328 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815709 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.815863 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.849496 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:46:09.288202081 +0000 UTC Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.883136 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.883198 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:17 crc kubenswrapper[4769]: E0122 13:44:17.883296 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.883313 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:17 crc kubenswrapper[4769]: E0122 13:44:17.883402 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:17 crc kubenswrapper[4769]: E0122 13:44:17.883468 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919702 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919923 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:17 crc kubenswrapper[4769]: I0122 13:44:17.919976 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:17Z","lastTransitionTime":"2026-01-22T13:44:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023093 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023193 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023221 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.023238 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127470 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127490 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.127544 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231404 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231575 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.231599 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335669 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.335895 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.371286 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:18 crc kubenswrapper[4769]: E0122 13:44:18.371442 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:18 crc kubenswrapper[4769]: E0122 13:44:18.371730 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:22.371711139 +0000 UTC m=+41.782821078 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438477 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.438505 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.542153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.542876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.543058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.543260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.543414 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646382 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.646425 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749424 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749500 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749526 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.749544 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.850029 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:27:42.450948691 +0000 UTC Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.851603 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.883289 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:18 crc kubenswrapper[4769]: E0122 13:44:18.883586 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:18 crc kubenswrapper[4769]: I0122 13:44:18.954103 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:18Z","lastTransitionTime":"2026-01-22T13:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.057870 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.057936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.057956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.057981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.058001 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.160998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.161033 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.161043 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.161059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.161070 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264651 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264784 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.264844 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.367915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.367960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.367971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.367988 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.368000 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.471341 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573802 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.573813 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676077 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676134 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.676178 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.779128 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.850150 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:50:00.93928525 +0000 UTC Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882176 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882195 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882256 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:19 crc kubenswrapper[4769]: E0122 13:44:19.882291 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:19 crc kubenswrapper[4769]: E0122 13:44:19.882422 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.882530 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:19 crc kubenswrapper[4769]: E0122 13:44:19.882636 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984731 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:19 crc kubenswrapper[4769]: I0122 13:44:19.984747 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:19Z","lastTransitionTime":"2026-01-22T13:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087147 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.087190 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.189922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.190002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.190024 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.190055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.190076 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292601 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.292618 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396337 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396365 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.396421 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499647 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.499674 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602411 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602429 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.602473 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705873 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.705884 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.807974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.808020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.808030 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.808045 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.808055 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.850688 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:17:34.187882344 +0000 UTC Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.883193 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:20 crc kubenswrapper[4769]: E0122 13:44:20.883416 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.911960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.912021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.912040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.912066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.912086 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:20Z","lastTransitionTime":"2026-01-22T13:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
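The loop above is the kubelet's node-status sync: the same five records repeat because the condition they report never changes, and it never changes because the container runtime finds no CNI network configuration. A minimal sketch of that gating probe in Go (an illustration, not kubelet or CRI-O source; the helper name hasCNIConfig and the accepted file extensions are assumptions based on common CNI conventions):

```go
// cnicheck.go: sketch of the readiness probe behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/". The runtime keeps reporting
// NetworkReady=false until a network config file shows up in that directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file (.conf, .conflist, or .json by convention).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err // a missing directory also means "not ready"
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // the directory named in the log
	ok, err := hasCNIConfig(dir)
	if err != nil || !ok {
		// Mirrors the condition the kubelet keeps publishing above.
		fmt.Printf("NetworkReady=false reason:NetworkPluginNotReady (dir=%s err=%v)\n", dir, err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```

On this node the directory stays empty because ovnkube-controller, the component that writes the OVN-Kubernetes config there, is itself failing to start (its CrashLoopBackOff record appears further down), so the kubelet re-records the NotReady block on every sync.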
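A second, distinct failure shows up in the records that follow: every "Failed to update status for pod" patch is rejected because the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-22. The same expired certificate blocks ovnkube-controller from annotating the node, which is why the CNI configuration never appears and the node stays NotReady. A minimal Go sketch of the x509 validity-window check that produces "certificate has expired or is not yet valid" (the certificate file path is an illustrative assumption):

```go
// certcheck.go: reproduce the x509 validity check that the failing webhook
// calls are tripping over. Diagnostic sketch, not OpenShift source.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path: point this at the PEM certificate the webhook serves.
	pemBytes, err := os.ReadFile("webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// The case this log is hitting: NotAfter is 2025-08-24T17:21:41Z.
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}
```

Nothing in the CNI layer itself is misconfigured; until the node-identity webhook's certificate is rotated (or the node clock agrees with the certificate's validity window), both the status patches below and the OVN startup path will keep failing the same way.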
Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.915577 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.934440 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.951049 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.966680 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:20 crc kubenswrapper[4769]: I0122 13:44:20.984132 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:20Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc 
kubenswrapper[4769]: I0122 13:44:21.014381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014426 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.014602 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f
0a55236b623ab8f3bb9a938a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] []},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.036145 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.050592 4769 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.069109 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.089533 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.106235 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118480 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118594 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.118673 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.128309 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.147427 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.160683 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.180257 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.195007 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.208133 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:21Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.221983 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325096 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325160 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325183 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.325257 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.427960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.428306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.428447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.428586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.428740 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.531981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.532048 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.532071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.532100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.532124 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635121 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.635159 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738431 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738499 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738517 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.738529 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841904 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.841926 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.851189 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 19:30:43.072248712 +0000 UTC Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.882877 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.882933 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:21 crc kubenswrapper[4769]: E0122 13:44:21.883035 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.882896 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:21 crc kubenswrapper[4769]: E0122 13:44:21.883259 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:21 crc kubenswrapper[4769]: E0122 13:44:21.883335 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945224 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:21 crc kubenswrapper[4769]: I0122 13:44:21.945392 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:21Z","lastTransitionTime":"2026-01-22T13:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.048279 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.151897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.151985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.152017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.152047 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.152067 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255115 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255209 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255237 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.255257 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357880 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.357996 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.416973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.417299 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.417446 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:30.417417125 +0000 UTC m=+49.828527094 (durationBeforeRetry 8s). 
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562601 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562639 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.562655 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653910 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653968 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.653995 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.667566 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673557 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673585 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.673608 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.691318 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696164 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.696206 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.723227 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729542 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729592 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729610 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.729623 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.748210 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754111 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754142 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.754156 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.769408 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:22Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.769538 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
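[annotation] Every status patch in this stream fails for the same root cause: the serving certificate behind the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-22, so the API server rejects the patch and kubelet eventually gives up ("update node status exceeds retry count"). A minimal Go sketch for confirming this from the node, assuming the listener is still up on 127.0.0.1:9743 as logged; chain verification is skipped so the handshake completes and the expired certificate can be read:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify lets the handshake finish even though the
	// certificate is expired, so its validity window can be inspected.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", time.Now().After(cert.NotAfter))
}

Against the state in this log, the last line would print "expired: true", matching the x509 error in the patch failures above.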
event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771163 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.771203 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.852236 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:36:49.922168606 +0000 UTC Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873911 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873924 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.873938 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.883250 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:22 crc kubenswrapper[4769]: E0122 13:44:22.883363 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:22 crc kubenswrapper[4769]: I0122 13:44:22.977283 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:22Z","lastTransitionTime":"2026-01-22T13:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.080874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.080942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.080959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.080983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.081001 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183454 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.183471 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286222 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.286269 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388708 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388785 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388842 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.388896 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491939 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.491956 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.594898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.594959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.594977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.595004 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.595021 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699599 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.699661 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802309 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.802363 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.852994 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:21:30.736238562 +0000 UTC Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.882549 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:23 crc kubenswrapper[4769]: E0122 13:44:23.882706 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.882549 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.882869 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:23 crc kubenswrapper[4769]: E0122 13:44:23.883023 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:23 crc kubenswrapper[4769]: E0122 13:44:23.883216 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
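[annotation] Each setters.go:603 "Node became not ready" record embeds the same v1 Ready condition object. A self-contained sketch that decodes one of these condition payloads, with a plain struct standing in for the corev1 type and carrying only the fields kubelet logs here:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"
)

// Condition mirrors the fields present in the logged Ready condition.
type Condition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Payload taken from the surrounding log records (message shortened).
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
	var c Condition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s=%s since %s (%s)\n",
		c.Type, c.Status, c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}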
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905354 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:23 crc kubenswrapper[4769]: I0122 13:44:23.905371 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:23Z","lastTransitionTime":"2026-01-22T13:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008022 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008080 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.008138 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.111330 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214163 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.214209 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316237 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.316257 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.418941 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.521996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.522070 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.522095 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.522132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.522157 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625709 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.625874 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728655 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728691 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.728708 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831378 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831398 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.831414 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.853646 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:32:58.444640367 +0000 UTC Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.883424 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:24 crc kubenswrapper[4769]: E0122 13:44:24.883628 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
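[annotation] The certificate_manager.go:356 lines show a fixed kubelet-serving expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass, and the deadlines logged so far already lie in the past, so the manager keeps re-attempting rotation each second. The sketch below assumes client-go's approach of jittering the deadline to a random fraction (roughly 70-90%) of the certificate's validity window, which is why the deadline moves while the expiry does not; the issue time is an assumption for illustration:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in roughly the 70-90% span of the
// certificate's validity, approximating client-go's certificate manager;
// the jitter is why each log line shows a different deadline for the same
// notAfter.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse("2006-01-02 15:04:05", "2026-02-24 05:53:03")
	notBefore := notAfter.Add(-295 * 24 * time.Hour) // assumed issue time
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}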
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.934864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.935340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.935591 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.935866 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:24 crc kubenswrapper[4769]: I0122 13:44:24.936105 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:24Z","lastTransitionTime":"2026-01-22T13:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.038619 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140718 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140837 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.140853 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.243776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.244073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.244161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.244270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.244384 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.347920 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.348057 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.348084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.348110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.348129 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.450970 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.554354 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.656964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.657009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.657021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.657037 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.657049 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759848 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.759898 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.853953 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:06:15.353581727 +0000 UTC Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.861942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.861976 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.861987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.862003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.862015 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.882429 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:25 crc kubenswrapper[4769]: E0122 13:44:25.882553 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.882438 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.882975 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:25 crc kubenswrapper[4769]: E0122 13:44:25.883116 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.883342 4769 scope.go:117] "RemoveContainer" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a" Jan 22 13:44:25 crc kubenswrapper[4769]: E0122 13:44:25.883422 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.965897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.965957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.965979 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.966011 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:25 crc kubenswrapper[4769]: I0122 13:44:25.966033 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:25Z","lastTransitionTime":"2026-01-22T13:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068440 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.068554 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.170574 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.245340 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/1.log" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.247546 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.248088 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.268558 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272128 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.272198 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.281732 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.297566 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.315249 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.337148 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.358800 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f
24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374443 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.374523 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.379315 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.417361 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.440403 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.457455 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.470065 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477103 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.477201 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.480200 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.491338 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.500674 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.512726 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.523230 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.532350 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:26Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579680 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.579692 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681826 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.681856 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785133 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.785176 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.854348 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:08:27.953493574 +0000 UTC Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.883873 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:26 crc kubenswrapper[4769]: E0122 13:44:26.884490 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888350 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.888366 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.990846 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.990928 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.990958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.990988 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:26 crc kubenswrapper[4769]: I0122 13:44:26.991005 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:26Z","lastTransitionTime":"2026-01-22T13:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094428 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094451 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.094496 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197493 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.197546 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.254524 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/2.log" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.255837 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/1.log" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.261001 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" exitCode=1 Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.261078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.261149 4769 scope.go:117] "RemoveContainer" containerID="21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.262265 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:27 crc kubenswrapper[4769]: E0122 13:44:27.262615 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.284693 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300376 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300408 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.300422 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.303047 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.314479 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.325866 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.339720 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.353839 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 
2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.372701 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a
63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://21a6f61ed512e5cacca4b895a2de4369e69b116f0a55236b623ab8f3bb9a938a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:11Z\\\",\\\"message\\\":\\\" 6154 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:10Z is after 2025-08-24T17:21:41Z]\\\\nI0122 13:44:11.061703 6154 services_controller.go:434] Service openshift-operator-lifecycle-manager/packageserver-service retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{packageserver-service openshift-operator-lifecycle-manager a60a1f74-c6ff-4c81-96ae-27ba9796ba61 5485 0 2025-02-23 05:23:24 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[olm.managed:true] map[] [{operators.coreos.com/v1alpha1 ClusterServiceVersion packageserver bbc08db6-5ba4-4fc4-b49d-26331e1e728b 0xc007b5cb4d 0xc007b5cb4e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePo\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403486 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.403575 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.441991 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.457344 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.472027 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.486087 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505870 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.505896 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.510711 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.523145 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.541127 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.553735 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.573243 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.584591 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:27Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608024 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.608129 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711253 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711308 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.711341 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.813974 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.854996 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:27:48.10866075 +0000 UTC Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.882776 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.882832 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:27 crc kubenswrapper[4769]: E0122 13:44:27.882986 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.883109 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:27 crc kubenswrapper[4769]: E0122 13:44:27.883252 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:27 crc kubenswrapper[4769]: E0122 13:44:27.883361 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916450 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:27 crc kubenswrapper[4769]: I0122 13:44:27.916603 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:27Z","lastTransitionTime":"2026-01-22T13:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.019727 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.123424 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227006 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227139 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.227162 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.268540 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/2.log" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.273745 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:28 crc kubenswrapper[4769]: E0122 13:44:28.274085 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.293916 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.312746 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.329917 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330419 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.330550 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.352327 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.367189 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.384462 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.404208 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.428006 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.434402 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.453288 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.472784 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.493304 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.511165 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.531302 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536713 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536780 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536834 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.536852 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.549139 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.565087 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.581595 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.595389 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:28Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:28 crc 
kubenswrapper[4769]: I0122 13:44:28.640113 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.640287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743854 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.743905 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846676 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.846744 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.855259 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:34:56.286933476 +0000 UTC Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.882971 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:28 crc kubenswrapper[4769]: E0122 13:44:28.883158 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950461 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950531 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:28 crc kubenswrapper[4769]: I0122 13:44:28.950611 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:28Z","lastTransitionTime":"2026-01-22T13:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054135 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054223 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.054268 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.158817 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261599 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.261625 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363720 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363859 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.363908 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467443 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.467546 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.570657 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.673986 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.674029 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.674041 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.674059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.674072 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777359 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.777415 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.855371 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:04:24.640498606 +0000 UTC Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.880705 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.883227 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.883227 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:29 crc kubenswrapper[4769]: E0122 13:44:29.883407 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.883252 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:29 crc kubenswrapper[4769]: E0122 13:44:29.883483 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:29 crc kubenswrapper[4769]: E0122 13:44:29.883589 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.983930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.984037 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.984056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.984084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:29 crc kubenswrapper[4769]: I0122 13:44:29.984167 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:29Z","lastTransitionTime":"2026-01-22T13:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.088379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191686 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.191840 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294752 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.294832 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397428 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397480 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.397501 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.499937 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.506971 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:30 crc kubenswrapper[4769]: E0122 13:44:30.507227 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:30 crc kubenswrapper[4769]: E0122 13:44:30.507328 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:44:46.507298115 +0000 UTC m=+65.918408084 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.602925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.602981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.602999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.603027 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.603043 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706917 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706949 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.706973 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810873 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.810945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.855865 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:11:20.958829379 +0000 UTC Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.883309 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:30 crc kubenswrapper[4769]: E0122 13:44:30.883861 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.901352 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.914668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.915019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.915161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.915332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.915458 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:30Z","lastTransitionTime":"2026-01-22T13:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.920638 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.935485 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.952157 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.967674 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:30 crc kubenswrapper[4769]: I0122 13:44:30.991944 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:30Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018426 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018574 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.018916 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.021853 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.037225 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.051458 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.065621 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.082289 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.100931 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.116179 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121156 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.121256 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.133164 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.147098 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.159678 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.173597 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:31Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223134 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223179 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.223235 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325690 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.325717 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.428576 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.532302 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.634947 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.634992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.635003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.635019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.635029 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737656 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737729 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.737757 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.819732 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.819945 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820022 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.819987653 +0000 UTC m=+83.231097632 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820088 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.820144 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820168 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.820146157 +0000 UTC m=+83.231256126 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820442 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.820591 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.820562888 +0000 UTC m=+83.231672927 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.840995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.841060 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.841077 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.841102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.841119 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.857764 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:33:00.423666844 +0000 UTC
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.883148 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.883321 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.883440 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.883515 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.883742 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.884049 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.921328 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.921446 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921508 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921542 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921559 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921623 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.921601623 +0000 UTC m=+83.332711582 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921647 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921678 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921702 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:31 crc kubenswrapper[4769]: E0122 13:44:31.921778 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:03.921744277 +0000 UTC m=+83.332854246 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944887 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:31 crc kubenswrapper[4769]: I0122 13:44:31.944944 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:31Z","lastTransitionTime":"2026-01-22T13:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.047324 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150559 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.150615 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254128 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254139 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.254169 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356403 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356425 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.356433 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.460209 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562850 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562870 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.562916 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664652 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664741 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.664787 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767585 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767610 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.767619 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.858034 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:59:40.008294427 +0000 UTC
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870254 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.870392 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.882491 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:32 crc kubenswrapper[4769]: E0122 13:44:32.882696 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973868 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973891 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:32 crc kubenswrapper[4769]: I0122 13:44:32.973900 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:32Z","lastTransitionTime":"2026-01-22T13:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.000583 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.021147 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.025336 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.041945 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047574 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.047716 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.069150 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073552 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.073614 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.097507 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.102995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.103032 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.103042 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.103058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.103067 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.120176 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:33Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.120557 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122384 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122442 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.122464 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.225309 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327697 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.327814 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.430405 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533911 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533932 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.533948 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.637185 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739724 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739881 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739905 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.739922 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842626 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842672 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.842689 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.858365 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:33:08.90780158 +0000 UTC Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.882751 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.882868 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.882753 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.882941 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.883104 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:33 crc kubenswrapper[4769]: E0122 13:44:33.883433 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:33 crc kubenswrapper[4769]: I0122 13:44:33.946571 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:33Z","lastTransitionTime":"2026-01-22T13:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049203 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.049224 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151945 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.151995 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.255767 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359070 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359160 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.359210 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.462287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.565869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.565941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.565953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.565971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.566005 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669396 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.669440 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772227 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.772344 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.856364 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.859044 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:27:44.644797544 +0000 UTC Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.867096 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.873220 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.874446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.874533 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.874614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.875203 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.875294 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.883205 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:34 crc kubenswrapper[4769]: E0122 13:44:34.883478 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.895564 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.913961 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.929069 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.943711 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.967174 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978449 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.978487 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:34Z","lastTransitionTime":"2026-01-22T13:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:34 crc kubenswrapper[4769]: I0122 13:44:34.984377 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.000665 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:34Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.013552 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.033161 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.051561 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f
24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.067196 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.082423 4769 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083448 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.083472 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.100197 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.113751 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.130031 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.147553 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:35Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187681 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187737 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.187760 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290435 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.290502 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393557 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.393585 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.496395 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600714 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.600733 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704198 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.704279 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807731 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807893 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.807912 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.859742 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:24:42.354756485 +0000 UTC Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.883064 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:35 crc kubenswrapper[4769]: E0122 13:44:35.883273 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.883088 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:35 crc kubenswrapper[4769]: E0122 13:44:35.883390 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.883064 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:35 crc kubenswrapper[4769]: E0122 13:44:35.883446 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910707 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910740 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:35 crc kubenswrapper[4769]: I0122 13:44:35.910775 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:35Z","lastTransitionTime":"2026-01-22T13:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013848 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013909 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013927 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.013940 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117135 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.117146 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.219693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.219925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.219959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.219984 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.220026 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323308 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323437 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.323455 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.427359 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.529855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.529983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.530012 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.530040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.530061 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633087 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.633255 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.737642 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.840900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.841003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.841023 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.841059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.841089 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.860248 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 16:11:48.866439463 +0000 UTC Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.883380 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:36 crc kubenswrapper[4769]: E0122 13:44:36.883561 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944773 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944841 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:36 crc kubenswrapper[4769]: I0122 13:44:36.944856 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:36Z","lastTransitionTime":"2026-01-22T13:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.047996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.048061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.048081 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.048107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.048123 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.150722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.150825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.150848 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.150882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.150907 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.253951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.254003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.254019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.254042 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.254058 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.356221 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.356283 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.356294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.356309 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.356323 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.459036 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.459079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.459091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.459107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.459118 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.561904 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.561970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.561992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.562021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.562043 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.665130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.665304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.665324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.665381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.665409 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.769067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.769132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.769156 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.769185 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.769203 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.861140 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:15:17.687543272 +0000 UTC Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.872143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.872180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.872191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.872205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.872215 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.882571 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.882616 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:37 crc kubenswrapper[4769]: E0122 13:44:37.882672 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.882714 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:37 crc kubenswrapper[4769]: E0122 13:44:37.882879 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:37 crc kubenswrapper[4769]: E0122 13:44:37.882985 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.974778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.974880 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.974901 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.974929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:37 crc kubenswrapper[4769]: I0122 13:44:37.974954 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:37Z","lastTransitionTime":"2026-01-22T13:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.077589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.077638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.077651 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.077671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.077683 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.180013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.180073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.180090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.180114 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.180131 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.282682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.282724 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.282733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.282744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.282753 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.385907 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.385975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.385999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.386026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.386045 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.489403 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.489458 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.489474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.489497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.489513 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.593475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.593542 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.593561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.593591 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.593611 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.696709 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.696771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.696828 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.696856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.696874 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.799640 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.799716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.799739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.799770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.799837 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.861610 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:15:08.532431295 +0000 UTC Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.882938 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:38 crc kubenswrapper[4769]: E0122 13:44:38.883143 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.902739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.902835 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.902855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.902878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:38 crc kubenswrapper[4769]: I0122 13:44:38.902895 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:38Z","lastTransitionTime":"2026-01-22T13:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.006002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.006072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.006090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.006118 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.006137 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.109623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.109700 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.109725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.109757 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.109779 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.214164 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.214227 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.214244 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.214267 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.214284 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.317446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.317580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.317601 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.317625 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.317658 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.421033 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.421121 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.421141 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.421167 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.421185 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.524014 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.524078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.524097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.524124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.524144 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.627470 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.627518 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.627530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.627549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.627565 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.730560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.730665 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.730728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.730751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.730855 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.835327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.835395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.835418 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.835444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.835463 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.861766 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 00:59:49.614863995 +0000 UTC Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.882330 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.882428 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.882563 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:39 crc kubenswrapper[4769]: E0122 13:44:39.882756 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:39 crc kubenswrapper[4769]: E0122 13:44:39.882964 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:39 crc kubenswrapper[4769]: E0122 13:44:39.883156 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.938361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.938465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.938488 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.938512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:39 crc kubenswrapper[4769]: I0122 13:44:39.938530 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:39Z","lastTransitionTime":"2026-01-22T13:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.041001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.041059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.041081 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.041109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.041130 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.144488 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.144560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.144598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.144633 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.144655 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.247838 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.247906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.247927 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.247950 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.247968 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.350767 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.350862 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.350888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.350921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.350945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.455120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.455181 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.455204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.455232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.455256 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.558822 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.558901 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.558924 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.558957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.558981 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.662409 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.662442 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.662452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.662467 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.662477 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.765529 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.765746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.765772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.765822 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.765839 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.862120 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:41:12.421339661 +0000 UTC
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.869964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.870077 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.870088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.870133 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.870148 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.882758 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:44:40 crc kubenswrapper[4769]: E0122 13:44:40.883028 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.906307 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.919731 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.941176 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.956133 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.968254 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973568 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.973604 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:40Z","lastTransitionTime":"2026-01-22T13:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.980779 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:40 crc kubenswrapper[4769]: I0122 13:44:40.999728 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:40Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.011847 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.024984 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.037354 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.053206 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.069527 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f
24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.077535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.077730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.077886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.078003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.078108 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.081091 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.090865 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.101452 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.112070 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.126296 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.139262 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:41Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180515 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180606 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.180715 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283600 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.283659 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386408 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386427 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.386470 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522691 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.522703 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.625607 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.728334 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831318 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.831402 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.863095 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:02:08.861932752 +0000 UTC Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.882581 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.882642 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.882661 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:41 crc kubenswrapper[4769]: E0122 13:44:41.882968 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:41 crc kubenswrapper[4769]: E0122 13:44:41.883100 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:41 crc kubenswrapper[4769]: E0122 13:44:41.883261 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.884445 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:41 crc kubenswrapper[4769]: E0122 13:44:41.884752 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:41 crc kubenswrapper[4769]: I0122 13:44:41.934972 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:41Z","lastTransitionTime":"2026-01-22T13:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.037571 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140117 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.140147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.242433 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344699 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.344854 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.447378 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.550561 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653113 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653223 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.653282 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756401 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.756523 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859087 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.859100 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.864236 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 04:06:05.309891746 +0000 UTC Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.882822 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:42 crc kubenswrapper[4769]: E0122 13:44:42.882982 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:42 crc kubenswrapper[4769]: I0122 13:44:42.961281 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:42Z","lastTransitionTime":"2026-01-22T13:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.063849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.063930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.063953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.063983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.064005 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166557 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166582 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.166636 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.268869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.268933 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.268955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.268989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.269012 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370702 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.370856 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.385207 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.399742 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404574 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404595 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.404649 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.425766 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430330 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.430445 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.448695 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.455862 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.470776 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.474379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.487577 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:43Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.487687 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489303 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.489312 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.591930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.591979 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.591989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.592004 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.592015 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.694950 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.694987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.694998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.695012 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.695022 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797732 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.797845 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.864931 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:30:56.535976803 +0000 UTC Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.882238 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.882330 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.882384 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.882474 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.882682 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:43 crc kubenswrapper[4769]: E0122 13:44:43.882889 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900434 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:43 crc kubenswrapper[4769]: I0122 13:44:43.900655 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:43Z","lastTransitionTime":"2026-01-22T13:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.003750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.004122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.004268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.004386 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.004500 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.106678 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209350 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.209361 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.312189 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414723 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.414747 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.517519 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.517775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.517861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.517951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.518036 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.622881 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725604 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.725646 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.828424 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.865905 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:42:37.072493181 +0000 UTC Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.883953 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:44 crc kubenswrapper[4769]: E0122 13:44:44.884132 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931185 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:44 crc kubenswrapper[4769]: I0122 13:44:44.931873 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034431 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.034442 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136155 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.136247 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.238744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.239002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.239067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.239148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.239211 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344467 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.344576 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.447777 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.556360 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658880 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.658891 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
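
The same five-entry block (four "Recording event message" lines plus one setters.go "Node became not ready" line) repeats at roughly 100 ms intervals because the kubelet re-evaluates readiness on every sync while the CNI config is missing; only the heartbeat timestamps advance. The condition payload is ordinary Kubernetes NodeCondition JSON, as a minimal decode shows (the literal is copied from the entries above):

```go
// Minimal sketch: parse one "Node became not ready" condition exactly as it
// appears in the log and report the reason the node is NotReady.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:44Z","lastTransitionTime":"2026-01-22T13:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		log.Fatal(err)
	}
	if c.Type == "Ready" && c.Status != "True" {
		fmt.Printf("node NotReady since %s: %s (%s)\n", c.LastTransitionTime, c.Reason, c.Message)
	}
}
```
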
Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.761425 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.863902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.863941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.863951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.863967 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.864013 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.866195 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:06:00.218959182 +0000 UTC Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.882628 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.882651 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.882631 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:45 crc kubenswrapper[4769]: E0122 13:44:45.882759 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:45 crc kubenswrapper[4769]: E0122 13:44:45.882848 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:45 crc kubenswrapper[4769]: E0122 13:44:45.882902 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966328 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966357 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:45 crc kubenswrapper[4769]: I0122 13:44:45.966369 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:45Z","lastTransitionTime":"2026-01-22T13:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.068591 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.171206 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273932 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.273964 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375884 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375891 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375904 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.375913 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478295 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.478324 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.579866 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:46 crc kubenswrapper[4769]: E0122 13:44:46.580066 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:46 crc kubenswrapper[4769]: E0122 13:44:46.580144 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:45:18.580123383 +0000 UTC m=+97.991233312 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.581405 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684347 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684429 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.684475 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
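
The MountVolume.SetUp failure for metrics-certs is not retried hot. The underlying error, object "openshift-multus"/"metrics-daemon-secret" not registered, generally means the kubelet's pod object cache has not yet registered or synced that Secret, and nestedpendingoperations parks the operation: the next attempt is scheduled 32 s out, at 13:45:18 (m=+97.99 since kubelet start). The 32 s is consistent with an exponential backoff doubling from 500 ms; the constants in this sketch mirror the kubelet's exponentialbackoff package but should be treated as an assumption about this version:

```go
// Sketch of the per-operation retry backoff behind "durationBeforeRetry 32s".
// 32s is the seventh step of a 500ms-based doubling sequence:
// 0.5s, 1s, 2s, 4s, 8s, 16s, 32s, ... capped at ~2m2s (assumed constants).
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2*time.Minute + 2*time.Second
	)
	d := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("failure %2d -> next retry in %v\n", attempt, d)
		d *= 2
		if d > maxDelay {
			d = maxDelay
		}
	}
}
```
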
Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.786880 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.866282 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:28:11.973655071 +0000 UTC Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.882874 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:46 crc kubenswrapper[4769]: E0122 13:44:46.883034 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888218 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.888302 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990184 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990224 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:46 crc kubenswrapper[4769]: I0122 13:44:46.990256 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:46Z","lastTransitionTime":"2026-01-22T13:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.092614 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.194726 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.297945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.352301 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/0.log" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.352489 4769 generic.go:334] "Generic (PLEG): container finished" podID="d4186e93-df8a-49d3-9068-c8b8acd05baa" containerID="f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122" exitCode=1 Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.352540 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerDied","Data":"f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.353311 4769 scope.go:117] "RemoveContainer" containerID="f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.370382 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.381209 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.394646 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400230 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.400301 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.406007 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver
-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.417096 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.426257 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.438969 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.448451 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.458966 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.469373 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.478131 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.489037 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.502703 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.502973 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.503035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.503047 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.503064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.503076 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.517807 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z"
Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.534451 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.552015 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f
24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.563345 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.590273 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:47Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606183 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.606219 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708099 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708108 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.708132 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.812383 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.867428 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 04:54:46.58243414 +0000 UTC Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.883056 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.883086 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:47 crc kubenswrapper[4769]: E0122 13:44:47.883203 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.883222 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:47 crc kubenswrapper[4769]: E0122 13:44:47.883333 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:47 crc kubenswrapper[4769]: E0122 13:44:47.883393 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:47 crc kubenswrapper[4769]: I0122 13:44:47.914705 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:47Z","lastTransitionTime":"2026-01-22T13:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017385 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017477 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.017499 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119640 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.119669 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222781 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222806 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222822 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.222831 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.324843 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.357566 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/0.log" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.357621 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.367820 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.379333 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.389226 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.402949 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.414199 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.426869 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.427998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.428035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.428045 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.428060 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.428072 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.446140 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01
-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.470601 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c4683
48a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.480839 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.499624 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.512175 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.524358 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529607 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.529632 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.535208 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.548603 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.560303 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.572427 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.582279 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.593210 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:48Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632211 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632223 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.632251 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735635 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.735666 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.837699 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.869173 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 02:43:22.507993165 +0000 UTC Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.882819 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:48 crc kubenswrapper[4769]: E0122 13:44:48.883063 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939584 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:48 crc kubenswrapper[4769]: I0122 13:44:48.939593 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:48Z","lastTransitionTime":"2026-01-22T13:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041539 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.041550 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143940 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.143979 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.246978 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349374 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349416 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349435 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.349463 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.452259 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554398 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.554492 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.656819 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759080 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759140 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.759169 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861162 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.861186 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.869712 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:04:39.535223809 +0000 UTC Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.883142 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.883164 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:49 crc kubenswrapper[4769]: E0122 13:44:49.883297 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.883343 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:49 crc kubenswrapper[4769]: E0122 13:44:49.883468 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:49 crc kubenswrapper[4769]: E0122 13:44:49.883520 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963264 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963318 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:49 crc kubenswrapper[4769]: I0122 13:44:49.963367 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:49Z","lastTransitionTime":"2026-01-22T13:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065398 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.065522 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.167826 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270829 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.270873 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.372290 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.474942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.474987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.474998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.475017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.475028 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.577981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.578059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.578075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.578112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.578125 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.680592 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784039 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784181 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.784202 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.870581 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 07:23:07.000481195 +0000 UTC Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.884119 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:50 crc kubenswrapper[4769]: E0122 13:44:50.884251 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886704 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.886753 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.901890 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.913494 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.929676 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.942444 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.957720 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.976118 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.986842 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988576 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.988585 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:50Z","lastTransitionTime":"2026-01-22T13:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:50 crc kubenswrapper[4769]: I0122 13:44:50.998610 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e635
5e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:50Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.012450 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.025142 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.035456 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.044139 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.054244 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.063327 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.076080 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.089213 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090284 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.090310 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.101086 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.111409 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:51Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.193291 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294854 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.294889 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397784 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.397834 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.500489 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.602915 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705741 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705837 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.705919 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808081 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.808200 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.870844 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:38:02.265395619 +0000 UTC Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.883090 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.883127 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.883183 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:51 crc kubenswrapper[4769]: E0122 13:44:51.883222 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:51 crc kubenswrapper[4769]: E0122 13:44:51.883379 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:51 crc kubenswrapper[4769]: E0122 13:44:51.883498 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910895 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:51 crc kubenswrapper[4769]: I0122 13:44:51.910971 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:51Z","lastTransitionTime":"2026-01-22T13:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013341 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.013404 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116522 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.116547 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218510 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.218539 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.321777 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.424588 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527167 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527218 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.527245 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629198 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.629291 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733427 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.733486 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836456 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.836489 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.870944 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 16:24:51.876862183 +0000 UTC Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.882330 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:52 crc kubenswrapper[4769]: E0122 13:44:52.882461 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938582 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938634 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:52 crc kubenswrapper[4769]: I0122 13:44:52.938643 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:52Z","lastTransitionTime":"2026-01-22T13:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040757 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.040852 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143819 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143828 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.143854 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246434 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246464 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.246491 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349374 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.349468 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453758 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453827 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.453853 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556600 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.556629 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584385 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.584404 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.599281 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604323 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604347 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.604356 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.619848 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.623922 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.635215 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.639878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.639946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.639963 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.639983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.640003 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.656327 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660413 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.660429 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.672079 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:53Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.672296 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674359 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
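
The three "Error updating node status, will retry" entries above, and the closing "update node status exceeds retry count", share one root cause: every status patch for node "crc" must pass the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node clock reads 2026-01-22, so the TLS handshake fails before any patch is applied. The kubelet retries a fixed number of times per sync and then gives up until the next cycle. A minimal diagnostic sketch for confirming the certificate window, assuming Python 3 with the third-party cryptography package run on the node itself (the host and port are taken from the log; everything else is illustrative):

    import datetime
    import ssl

    from cryptography import x509

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint reported in the log

    # Fetch the serving certificate without chain verification; a verifying
    # client would abort the handshake because the certificate is expired.
    pem = ssl.get_server_certificate((HOST, PORT))
    cert = x509.load_pem_x509_certificate(pem.encode())

    now = datetime.datetime.now(datetime.timezone.utc)
    not_after = cert.not_valid_after.replace(tzinfo=datetime.timezone.utc)
    print("subject: ", cert.subject.rfc4514_string())
    print("notAfter:", not_after.isoformat())
    print("expired: ", now > not_after)  # True for the state this log captures

For the state captured here the sketch would report notAfter 2025-08-24T17:21:41Z, matching the x509 error text in the entries above.
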
event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674441 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.674457 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777034 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777117 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.777129 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.871420 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:07:18.887202669 +0000 UTC Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.879284 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.882392 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.882421 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.882486 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.882643 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.882737 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:53 crc kubenswrapper[4769]: E0122 13:44:53.882875 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
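
The NodeNotReady heartbeats and the "Error syncing pod, skipping" entries in this stretch all reduce to the same condition: /etc/kubernetes/cni/net.d/ holds no CNI network configuration yet, so the container runtime reports NetworkReady=false and the kubelet will not create sandboxes for pods that need the cluster network until the network provider (OVN-Kubernetes here) writes its config. A small sketch of the check being made, assuming Python 3; the directory path comes from the log, and the accepted extensions mirror what libcni loads:

    import os

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path reported in the log

    # CRI runtimes wait for at least one CNI config file here before
    # reporting NetworkReady=true.
    try:
        confs = sorted(
            name for name in os.listdir(CNI_CONF_DIR)
            if name.endswith((".conf", ".conflist", ".json"))
        )
    except FileNotFoundError:
        confs = []

    if confs:
        print("CNI configuration present:", confs)
    else:
        print("no CNI configuration file in", CNI_CONF_DIR)  # the log's condition
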
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982757 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:53 crc kubenswrapper[4769]: I0122 13:44:53.982768 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:53Z","lastTransitionTime":"2026-01-22T13:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.084922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.084978 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.084994 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.085015 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.085031 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187099 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187114 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.187144 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289805 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.289818 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.391766 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.494928 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.494982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.494997 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.495017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.495035 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.597150 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700704 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.700748 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803464 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803502 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.803538 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.872237 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:41:31.469833339 +0000 UTC Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.882737 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:54 crc kubenswrapper[4769]: E0122 13:44:54.882983 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.905938 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.905987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.906003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.906026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:54 crc kubenswrapper[4769]: I0122 13:44:54.906044 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:54Z","lastTransitionTime":"2026-01-22T13:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008672 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008713 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.008759 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111813 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.111830 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.215321 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.318897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.318961 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.318976 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.318998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.319011 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421893 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.421906 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524781 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524814 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524830 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.524842 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627633 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.627703 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.730312 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832569 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832669 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.832681 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.872932 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:29:18.638705998 +0000 UTC Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.882599 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.882630 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.882713 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:55 crc kubenswrapper[4769]: E0122 13:44:55.882889 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:55 crc kubenswrapper[4769]: E0122 13:44:55.883694 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:55 crc kubenswrapper[4769]: E0122 13:44:55.883771 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
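
The kubernetes.io/kubelet-serving lines interleaved above come from client-go's certificate manager, which recomputes a jittered rotation deadline each time it wakes, documented as 80% plus or minus 10% of the certificate lifetime. That explains why consecutive entries print different deadlines (2025-11-11, 2025-12-08, 2025-11-10) for the same certificate expiring 2026-02-24, and why every deadline is already in the past at 2026-01-22, keeping rotation permanently due. A sketch of that computation under stated assumptions (only notAfter appears in the log; the one-year notBefore is an assumption):

    import datetime
    import random

    # notAfter is from the log; notBefore assumes a one-year validity window.
    NOT_BEFORE = datetime.datetime(2025, 2, 24, 5, 53, 3, tzinfo=datetime.timezone.utc)
    NOT_AFTER = datetime.datetime(2026, 2, 24, 5, 53, 3, tzinfo=datetime.timezone.utc)

    def next_rotation_deadline() -> datetime.datetime:
        # 80% +/- 10% of the lifetime, per client-go's documented policy.
        fraction = random.uniform(0.7, 0.9)
        return NOT_BEFORE + fraction * (NOT_AFTER - NOT_BEFORE)

    now = datetime.datetime(2026, 1, 22, 13, 44, 55, tzinfo=datetime.timezone.utc)
    deadline = next_rotation_deadline()
    print("rotation deadline:", deadline.isoformat())
    print("rotation already due:", deadline < now)  # True for these dates
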
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.884245 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:55 crc kubenswrapper[4769]: I0122 13:44:55.936388 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:55Z","lastTransitionTime":"2026-01-22T13:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039453 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.039590 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141827 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.141901 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.243960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.244016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.244033 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.244055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.244073 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346559 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346603 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346613 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.346641 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.389752 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/2.log" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.392424 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.393280 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.417488 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5
bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.445196 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449057 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449155 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.449171 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.461668 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.484893 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.497270 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.510932 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.522933 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.534709 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.545867 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.551955 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.556972 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.568464 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.591219 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.604552 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.618040 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.631764 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.648834 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.653473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.653521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc 
kubenswrapper[4769]: I0122 13:44:56.653530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.653546 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.653554 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.664665 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9
eee1849d5e53acc1d4c486b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.674887 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:56Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.755619 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858248 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.858344 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.873676 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:30:11.980090113 +0000 UTC Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.883088 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:56 crc kubenswrapper[4769]: E0122 13:44:56.883237 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:56 crc kubenswrapper[4769]: I0122 13:44:56.961278 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:56Z","lastTransitionTime":"2026-01-22T13:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.063901 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.063962 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.063982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.064013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.064039 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167027 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167083 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167125 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.167141 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269976 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.269992 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372323 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.372349 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.396409 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.397030 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/2.log" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.399629 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" exitCode=1 Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.399662 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.399694 4769 scope.go:117] "RemoveContainer" containerID="c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.400649 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:44:57 crc kubenswrapper[4769]: E0122 13:44:57.401098 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.413118 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.422961 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.437399 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.446337 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.455758 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.466386 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475014 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475085 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.475096 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.477838 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.492726 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.509520 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a2c8b17afe59bd8ef3c5908ea0b3175ae6612f24331a32bc0626daa47d5d14\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:27Z\\\",\\\"message\\\":\\\"vent on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0122 13:44:26.875689 6349 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875703 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI0122 13:44:26.875715 6349 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI0122 13:44:26.875722 6349 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI0122 13:44:26.875727 6349 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI0122 13:44:26.875737 6349 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI0122 13:44:26.875745 6349 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF0122 13:44:26.875769 6349 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:57Z\\\",\\\"message\\\":\\\"rvice openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) 
load balancers\\\\nI0122 13:44:56.848220 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0122 13:44:56.848230 6704 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698
848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.519592 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.535457 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be
8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.545749 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.558735 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.566864 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.577904 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.579702 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.591242 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.601017 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.609913 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:57Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680283 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680318 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.680332 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.783723 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.874213 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:09:48.623975891 +0000 UTC Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.882548 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.882587 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:57 crc kubenswrapper[4769]: E0122 13:44:57.882711 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.882765 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:57 crc kubenswrapper[4769]: E0122 13:44:57.882959 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:57 crc kubenswrapper[4769]: E0122 13:44:57.883015 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887488 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.887553 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:57 crc kubenswrapper[4769]: I0122 13:44:57.990741 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:57Z","lastTransitionTime":"2026-01-22T13:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093453 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.093535 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.196945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299380 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.299433 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.401939 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.402008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.402028 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.402056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.402080 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.406050 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.410099 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:44:58 crc kubenswrapper[4769]: E0122 13:44:58.410264 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.424363 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.438532 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.456229 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.474876 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.491090 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.502938 4769 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505089 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505115 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.505168 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.514617 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.526911 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.536098 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.548101 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.559614 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 
13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.572563 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.589543 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.608005 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.608072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc 
kubenswrapper[4769]: I0122 13:44:58.608089 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.608113 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.608130 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.615154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9
eee1849d5e53acc1d4c486b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:57Z\\\",\\\"message\\\":\\\"rvice openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 13:44:56.848220 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0122 13:44:56.848230 6704 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.630613 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.656004 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a380114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c1
9436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.670534 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.687321 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:44:58Z is after 2025-08-24T17:21:41Z" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710307 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710338 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710363 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.710372 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.813291 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.875216 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:46:18.272885062 +0000 UTC Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.882963 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:44:58 crc kubenswrapper[4769]: E0122 13:44:58.883149 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:58 crc kubenswrapper[4769]: I0122 13:44:58.915894 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:58Z","lastTransitionTime":"2026-01-22T13:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.018680 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.018748 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.018771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.018835 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.018861 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.121815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.121866 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.121885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.121909 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.121928 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.224648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.224725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.224749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.224782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.224848 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.327951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.328018 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.328036 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.328060 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.328081 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.431195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.431257 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.431271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.431291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.431303 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.534704 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.534736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.534745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.534760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.534769 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.638079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.638136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.638153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.638175 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.638191 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.741305 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.741443 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.741471 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.741500 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.741521 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844182 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.844313 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.875821 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:45:19.652817626 +0000 UTC Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.883246 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.883370 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.883599 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:44:59 crc kubenswrapper[4769]: E0122 13:44:59.883727 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:44:59 crc kubenswrapper[4769]: E0122 13:44:59.883899 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:44:59 crc kubenswrapper[4769]: E0122 13:44:59.883987 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.898028 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947522 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:44:59 crc kubenswrapper[4769]: I0122 13:44:59.947539 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:44:59Z","lastTransitionTime":"2026-01-22T13:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.050947 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.154568 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.154616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.154627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.154644 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.154654 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.256680 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.256726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.256738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.256755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.256767 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.360278 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.360305 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.360314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.360327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.360337 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.462577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.462626 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.462636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.462656 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.462666 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.564746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.564778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.564805 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.564823 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.564832 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.666612 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.666663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.666675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.666695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.666707 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.769171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.769234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.769249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.769263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.769271 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871551 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871563 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.871599 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.876696 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:43:04.35947726 +0000 UTC Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.883090 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:00 crc kubenswrapper[4769]: E0122 13:45:00.883225 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.905112 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b962200fa0a2d9c500125f56e96656fb0feb5d500a768d148cf9ca2a0569f970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f407182e277e55ccfc4bdc9fdd0832e0eab3cad2df3b32a66189e599dd303f86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.927479 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b546a5839399c4434cd7427e59be44414fd81cd31a3b8736bbc23d9c03ab5fd9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.945285 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3ee5efc-8b71-4691-8f78-ff11abb2d770\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0315382a0b43a2b3069391b3c63464c38b94daf1baf2700f5001abca332fc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8e31b29c1c4da39b2854e1750a906e380a822c602e2b7a24158ee582ba95627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8e31b29c1c4da39b2854e1750a906e380a822c602e2b7a24158ee582ba95627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.965300 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4d5e43a9-5dd9-470e-a3e1-65be2c0003c4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"le observer\\\\nW0122 13:43:58.922221 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0122 13:43:58.922428 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 13:43:58.923819 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3279922669/tls.crt::/tmp/serving-cert-3279922669/tls.key\\\\\\\"\\\\nI0122 13:43:59.114141 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 13:43:59.115832 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 13:43:59.115850 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 13:43:59.115871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 13:43:59.115876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 13:43:59.120589 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 13:43:59.120619 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120629 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 13:43:59.120643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 13:43:59.120651 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 13:43:59.120656 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 13:43:59.120662 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 13:43:59.120676 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 13:43:59.123476 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974436 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974487 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974501 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.974536 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:00Z","lastTransitionTime":"2026-01-22T13:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.980235 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:00 crc kubenswrapper[4769]: I0122 13:45:00.994431 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0af8746-c9f0-48e6-8a60-02fed286b419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4be9c299672c1ced5d45263cb73ea0d7766a3cd47bbb16996c91c24206203ffe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bhgc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-hwhw7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:00Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.012723 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bqn6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"16fc232a-07ad-4611-8612-7b1c3f784c14\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55ec909e6b2784f7c1e2fca82827a0c42109ef1c08e0c2ae484574c8f2c6460f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pwhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bqn6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.027273 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29c69aef-2c74-4731-8334-85c8c755be74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://05de7d7a90042aebcc3f9c3ecd82febecef6e209d3c12dfe22a55b0a2960afdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10390dacc9fe0452c4b8e2f3b43ffa16abdb260918a2cea271e546875c22cd84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m892q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pwktf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.041304 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cfh49" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9764ff0b-ae92-470b-af85-7c8bb41642ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vshp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cfh49\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.055161 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.069120 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x582x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34fa095e-fc7f-431c-8421-1178e63721ac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c630331d1e07a9bd283dfb95e9324ecc7666491cb2a4383ba82f45bbcaee3ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2c8w6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x582x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076758 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076823 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.076848 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.084081 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fclh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4186e93-df8a-49d3-9068-c8b8acd05baa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:46Z\\\",\\\"message\\\":\\\"2026-01-22T13:44:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8\\\\n2026-01-22T13:44:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3c45a748-1fff-4c37-bf93-cbbef666b3f8 to /host/opt/cni/bin/\\\\n2026-01-22T13:44:01Z [verbose] multus-daemon started\\\\n2026-01-22T13:44:01Z [verbose] Readiness Indicator file check\\\\n2026-01-22T13:44:46Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk8w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fclh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.099686 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://340e014090eff13efd7d84a279f298e5d933733828964a3926f5ce780010c692\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.116704 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:59Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.133352 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd0cf7bc-a4fc-4a12-aafc-28598fdd5d76\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f6f2c33f3aa2c24eb46a672d0d23d034a7833eb85c8d4c7313b04fd659d7db54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9a8ab5a8012fd06ec2c2e46a7217d6708631d44a24a09103a7556b5a69e77a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a32c7fa5559c86294f8f08171072aefe337f800bb78c15c6dffae54ca5faabe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6ff35c69cda850ce58b00acdc9922824b9c166d8e8982ac96c7f83881bb6fd8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e463779cc1d83fdf6748ef8c6df8b4fb82f65877db2a18a1223fab778f5626e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8871e0d928d10c9c85abaa214ad97c94905e61d175f7fc42cae9794c14a7046c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9fe08f6e44e118dfcdfdb3c7c74105e6b2ead8837eb6cfb9d826870d66c89ac\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:06Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hprv8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d9wdl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.162666 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c028db8-99b9-422d-ba46-e1a2db06ce3c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9
eee1849d5e53acc1d4c486b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T13:44:57Z\\\",\\\"message\\\":\\\"rvice openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0122 13:44:56.848220 6704 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nI0122 13:44:56.848230 6704 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"e4e4203e-87c7-4024-930a-5d6bdfe2bdde\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-api/machine-api-operator-webhook_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:fals\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:44:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:44:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:44:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p276w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:44:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jrg8z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.177965 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e17d6c01-6246-4f19-b9a9-e3931ac380fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7cd7e89ca0bee05fa5b6d5a5ca1d303af1299572c4480fb92a515acaa792d6c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb0506cd1a0b9519c03150969442ddf7bfe4621fed24943b71fed8eb2d9788f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2655c9a58f6e63f5a53485b0bf1a679818c12a7988705232c65930e5f421eb9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d250061efa0ea6e9a6e20599aef055162d62e1c901353b8eac8b3568dff86166\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179484 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179513 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179536 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.179545 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.196323 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8acbc9bd-4838-4547-80a1-a1e16c37bd1a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:44:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5acd2ae7d3fbf9440fb0095eefddc52c84728ec6d0e4d9821229cb6aa12573d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5c6216f99e720568a7dd7656244ade4b4fd44412e714b6135242a19f78761f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d261d3f5d679cb5cf8f94fe77bb969fd91ca614da19b149ce6eb02a3
80114d79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://752fd19e1b4a5fd4026faf94d00dfd9322825c19436cccb582e074f4c203bd9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04ae80bb757e0b6ea9a3759ec49af25cff9106966608aa9750a0026e6053d15f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bdd3502b371640a94853d1315dd11aefcb563149db1c61ed1afd15c4434c2c59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53f77f30fd195dd06e627a1c6a7d9c0c79aa74bb7b99144887efc4c210a66f43\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61f51606519d29e8458a5340472d50a2f0482ff092341e1ffb7bb7c42d551440\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T13:43:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T13:43:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.208825 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b2dae2b-fadf-49f8-8ff4-4772ad128dca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T13:43:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45689b9787ddf9d1ee0769e761b7b24a3804e1a92e4bc522b5fc4ef4dc145ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ad0f1c98d4f80ff4fb3c0a0b65518ea6fdf45f4e428661eb8c34419d0774da8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c0a1825528a237f05c91fcea30668c1b12f4b58a604912844e672424907cc0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T13:43:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T13:43:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:01Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281707 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.281740 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.385322 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.487915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.488287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.488305 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.488328 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.488344 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590499 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590544 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.590566 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.693385 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795426 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795777 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795872 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.795947 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.877095 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:20:01.499143232 +0000 UTC Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.882418 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.882497 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:01 crc kubenswrapper[4769]: E0122 13:45:01.882543 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.882636 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:01 crc kubenswrapper[4769]: E0122 13:45:01.882759 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:01 crc kubenswrapper[4769]: E0122 13:45:01.883091 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898565 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:01 crc kubenswrapper[4769]: I0122 13:45:01.898584 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:01Z","lastTransitionTime":"2026-01-22T13:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001129 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001206 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.001247 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104403 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104480 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.104522 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.207634 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310206 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310235 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.310245 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.413337 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515184 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515231 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.515280 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.617964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.618342 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.618478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.618621 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.618762 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.721942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.722301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.722555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.722724 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.722906 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825719 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825786 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.825811 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.878263 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:42:32.403972393 +0000 UTC Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.882616 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:02 crc kubenswrapper[4769]: E0122 13:45:02.882825 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927442 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:02 crc kubenswrapper[4769]: I0122 13:45:02.927535 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:02Z","lastTransitionTime":"2026-01-22T13:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.029953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.029998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.030009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.030026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.030037 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134025 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.134134 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236905 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236913 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.236937 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.339345 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441370 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441420 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.441490 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.544250 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646606 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646651 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646677 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.646687 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.748980 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.749039 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.749056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.749078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.749094 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.843957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.844030 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.844048 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.844074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.844093 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.858511 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.867571 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.878581 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:25:21.68190389 +0000 UTC Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.882860 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.882860 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.883018 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.883310 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.884884 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.884962 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.885017 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.887066 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
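Every status-patch retry in this stretch of the log dies in the webhook's TLS handshake with the same x509 error (the full patch payload is preserved in the first occurrence above). The validity test behind that message is a plain window comparison against the certificate's NotBefore/NotAfter; a self-contained sketch using only the Go standard library, where the certificate path is a placeholder rather than a path taken from this log:

```go
// certcheck.go - a minimal sketch of the validity check behind the
// "x509: certificate has expired or is not yet valid" failures above.
// The file path is a placeholder; point it at the webhook's serving cert.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	fmt.Printf("NotBefore: %s\nNotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
	// The same window test the TLS handshake applies before trusting the peer.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```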
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887239 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.887209994 +0000 UTC m=+147.298319963 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.887336 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.887530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887545 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887626 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.887603905 +0000 UTC m=+147.298713864 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887687 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887759 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.887735388 +0000 UTC m=+147.298845347 (durationBeforeRetry 1m4s). 
Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.887759 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.887735388 +0000 UTC m=+147.298845347 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889613 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.889655 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.905346 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z"
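The condition={...} value on each setters.go line above is a JSON-serialized node condition. A self-contained sketch that reproduces the same shape follows; the struct mirrors the Kubernetes core/v1 NodeCondition, re-declared locally (with plain string timestamps) so the example compiles without the k8s.io/api dependency.

```go
// condition.go - a self-contained sketch of the condition={...} payload
// logged by setters.go above. Field names and order mirror the shape of
// the Kubernetes core/v1 NodeCondition.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	ready := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2026-01-22T13:45:03Z",
		LastTransitionTime: "2026-01-22T13:45:03Z",
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(ready)
	fmt.Println(string(out)) // reproduces the condition={...} value above
}
```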
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910471 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
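The certificate_manager.go line earlier in this boot pairs an expiration of 2026-02-24 05:53:03 UTC with a rotation deadline of 2025-11-14 16:25:21 UTC, months ahead of expiry. That gap is what a jittered-deadline policy produces: rotation is scheduled at a random fraction of the validity window so a fleet does not renew all at once. A sketch of the idea; the 70-90% range and the assumed issue date are assumptions, not values read from this log.

```go
// rotation.go - a sketch of how a rotation deadline like the one in the
// certificate_manager.go line above can sit months before expiry: the
// deadline is drawn at a random fraction of the validity window.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a uniformly random point 70-90% of the way
// through the certificate's validity window (the range is an assumption
// about the rotation policy, not taken from this log).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(fraction * float64(total)))
}

func main() {
	notBefore := time.Date(2025, time.February, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	// A 2025-11-14 deadline sits ~72% through this window, inside the
	// assumed 70-90% jitter band.
}
```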
event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.910547 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.931350 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936208 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.936240 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.951975 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T13:45:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c179e315-653f-44a2-90da-146c8bca7b57\\\",\\\"systemUUID\\\":\\\"a3bb8776-1087-4679-a96f-5f1347bd430e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T13:45:03Z is after 2025-08-24T17:21:41Z" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.952130 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.953935 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:03Z","lastTransitionTime":"2026-01-22T13:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.988804 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:03 crc kubenswrapper[4769]: I0122 13:45:03.988865 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.988967 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.988982 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.988993 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989013 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989042 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989054 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989042 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.989029699 +0000 UTC m=+147.400139628 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:45:03 crc kubenswrapper[4769]: E0122 13:45:03.989110 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 13:46:07.989096041 +0000 UTC m=+147.400205970 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056679 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.056713 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159718 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.159758 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.262169 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364897 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364948 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.364983 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468096 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.468241 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.571294 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.673774 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776305 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776377 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776397 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.776411 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.878829 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:02:10.581141434 +0000 UTC Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879309 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.879333 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.882750 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:04 crc kubenswrapper[4769]: E0122 13:45:04.883116 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981257 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:04 crc kubenswrapper[4769]: I0122 13:45:04.981267 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:04Z","lastTransitionTime":"2026-01-22T13:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083820 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.083832 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.186982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.187108 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.187193 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.187291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.187353 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.290684 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393656 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393700 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.393720 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497451 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.497497 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.600905 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704292 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704303 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.704331 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807450 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807470 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.807514 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.879015 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:34:44.391885898 +0000 UTC Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.882238 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.882301 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.882300 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:05 crc kubenswrapper[4769]: E0122 13:45:05.882388 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:05 crc kubenswrapper[4769]: E0122 13:45:05.882644 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:05 crc kubenswrapper[4769]: E0122 13:45:05.882745 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:05 crc kubenswrapper[4769]: I0122 13:45:05.917184 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:05Z","lastTransitionTime":"2026-01-22T13:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020851 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020911 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.020941 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.123772 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.226216 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328839 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.328941 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431125 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.431142 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.534199 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.636883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.637008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.637039 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.637071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.637132 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740085 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740201 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.740219 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.842576 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.879272 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 15:43:38.698361382 +0000 UTC Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.882854 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:06 crc kubenswrapper[4769]: E0122 13:45:06.883031 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945583 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:06 crc kubenswrapper[4769]: I0122 13:45:06.945595 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:06Z","lastTransitionTime":"2026-01-22T13:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.049199 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.151091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.151145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.151156 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.151174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.151185 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.254465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.254511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.254523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.254540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.254554 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.357226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.357287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.357300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.357316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.357347 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.459964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.460036 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.460072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.460103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.460124 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.563217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.563292 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.563310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.563332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.563350 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.666260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.666379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.666399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.666427 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.666446 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.768885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.768958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.768970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.768992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.769060 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.871570 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.879565 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:54:22.06855861 +0000 UTC Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.882864 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.882878 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:07 crc kubenswrapper[4769]: E0122 13:45:07.882960 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.882864 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:07 crc kubenswrapper[4769]: E0122 13:45:07.883039 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:07 crc kubenswrapper[4769]: E0122 13:45:07.883105 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974824 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:07 crc kubenswrapper[4769]: I0122 13:45:07.974922 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:07Z","lastTransitionTime":"2026-01-22T13:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.077620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.077692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.077712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.077734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.077750 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.181446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.181530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.181557 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.181591 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.181616 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.285090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.285150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.285166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.285191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.285207 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.387392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.387468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.387480 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.387497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.387546 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.490583 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.490638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.490650 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.490670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.490681 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.593145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.593191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.593200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.593215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.593230 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696337 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.696347 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.798989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.799058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.799076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.799100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.799119 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.879987 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:00:50.451707017 +0000 UTC Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.883449 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:08 crc kubenswrapper[4769]: E0122 13:45:08.883740 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901633 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901647 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901665 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:08 crc kubenswrapper[4769]: I0122 13:45:08.901676 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:08Z","lastTransitionTime":"2026-01-22T13:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.004884 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.107660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.107703 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.107716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.107732 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.107744 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.211263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.211317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.211333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.211354 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.211371 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.314497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.314563 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.314581 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.314605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.314626 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.417859 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.417929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.417939 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.417952 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.417961 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.520263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.520300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.520317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.520333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.520343 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.624013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.624078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.624098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.624124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.624142 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726785 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726819 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.726833 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830141 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.830187 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.880947 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 13:36:06.389180386 +0000 UTC Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.882450 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.882460 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:09 crc kubenswrapper[4769]: E0122 13:45:09.882594 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
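pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"

Note how the kubelet-serving rotation deadline drifts between attempts (2025-11-30, 2025-12-01, 2025-12-09, 2025-12-21 above) while the expiration stays fixed at 2026-02-24 05:53:03. That pattern matches the upstream client-go certificate manager, which re-draws the deadline as a uniformly random point in the 70-90% span of the certificate's validity window each time it evaluates rotation. A small sketch of that rule; only the expiry is taken from the log, and the issue time (a one-year certificate) is an assumption:

    import random
    from datetime import datetime, timezone

    # Expiry copied from the log; the issue time is assumed (one-year validity).
    not_before = datetime(2025, 2, 24, 5, 53, 3, tzinfo=timezone.utc)
    not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)

    validity = not_after - not_before
    # Uniform draw in the 70-90% span of the validity window, re-done on each
    # evaluation, which is why successive log lines show different deadlines.
    deadline = not_before + validity * (0.7 + 0.2 * random.random())
    print(deadline)  # ~2025-11-06 .. ~2026-01-18, bracketing the deadlines above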
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.882601 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:09 crc kubenswrapper[4769]: E0122 13:45:09.882722 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:09 crc kubenswrapper[4769]: E0122 13:45:09.882762 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933565 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:09 crc kubenswrapper[4769]: I0122 13:45:09.933607 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:09Z","lastTransitionTime":"2026-01-22T13:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.037021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.037073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.037107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.037130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.037143 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.140941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.141010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.141030 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.141055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.141071 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.244527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.244602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.244621 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.244645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.244665 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.348421 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.348473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.348485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.348507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.348519 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.451365 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.451432 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.451447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.451467 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.451482 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.555180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.555270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.555293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.555325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.555347 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.658590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.658642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.658651 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.658670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.658683 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.761660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.761734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.761751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.761775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.761834 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.865084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.865149 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.865166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.865192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.865214 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.881832 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:19:39.860239255 +0000 UTC Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.883253 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:10 crc kubenswrapper[4769]: E0122 13:45:10.883563 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.920744 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-x582x" podStartSLOduration=71.920720117 podStartE2EDuration="1m11.920720117s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:10.919592857 +0000 UTC m=+90.330702796" watchObservedRunningTime="2026-01-22 13:45:10.920720117 +0000 UTC m=+90.331830046" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.937714 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fclh4" podStartSLOduration=70.937694511 podStartE2EDuration="1m10.937694511s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:10.936387616 +0000 UTC m=+90.347497545" watchObservedRunningTime="2026-01-22 13:45:10.937694511 +0000 UTC m=+90.348804440" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.970634 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pwktf" podStartSLOduration=70.970606211 podStartE2EDuration="1m10.970606211s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:10.953321599 +0000 UTC m=+90.364431558" watchObservedRunningTime="2026-01-22 13:45:10.970606211 +0000 UTC m=+90.381716170" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975041 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975117 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975167 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:10 crc kubenswrapper[4769]: I0122 13:45:10.975193 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:10Z","lastTransitionTime":"2026-01-22T13:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.007690 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d9wdl" podStartSLOduration=71.007668103 podStartE2EDuration="1m11.007668103s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.006560323 +0000 UTC m=+90.417670292" watchObservedRunningTime="2026-01-22 13:45:11.007668103 +0000 UTC m=+90.418778072" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077068 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.077133 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.088348 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.088329181 podStartE2EDuration="37.088329181s" podCreationTimestamp="2026-01-22 13:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.087493909 +0000 UTC m=+90.498603858" watchObservedRunningTime="2026-01-22 13:45:11.088329181 +0000 UTC m=+90.499439110" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.111905 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=71.111889022 podStartE2EDuration="1m11.111889022s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.110934816 +0000 UTC m=+90.522044755" watchObservedRunningTime="2026-01-22 13:45:11.111889022 +0000 UTC m=+90.522998951" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.127832 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.127814597 podStartE2EDuration="1m13.127814597s" podCreationTimestamp="2026-01-22 13:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.126580535 +0000 UTC m=+90.537690474" watchObservedRunningTime="2026-01-22 13:45:11.127814597 +0000 UTC m=+90.538924546" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178677 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.178868 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
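Has your network provider started?"}

The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, with firstStartedPulling/lastFinishedPulling left at the zero time, apparently because no image pull was observed. Re-doing the arithmetic for the node-resolver-x582x entry, with both timestamps copied from the log:

    from datetime import datetime, timezone

    # Values copied from the node-resolver-x582x entry above
    # (nanoseconds truncated to Python's microsecond precision).
    created = datetime(2026, 1, 22, 13, 43, 59, tzinfo=timezone.utc)
    observed = datetime(2026, 1, 22, 13, 45, 10, 920720, tzinfo=timezone.utc)

    print((observed - created).total_seconds())  # 71.92072 ~= logged 1m11.920720117s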
Has your network provider started?"} Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.196917 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.196896357 podStartE2EDuration="12.196896357s" podCreationTimestamp="2026-01-22 13:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.177083247 +0000 UTC m=+90.588193186" watchObservedRunningTime="2026-01-22 13:45:11.196896357 +0000 UTC m=+90.608006296" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.209764 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.20974641 podStartE2EDuration="1m11.20974641s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.197909863 +0000 UTC m=+90.609019802" watchObservedRunningTime="2026-01-22 13:45:11.20974641 +0000 UTC m=+90.620856339" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.252620 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podStartSLOduration=72.252599327 podStartE2EDuration="1m12.252599327s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.252439682 +0000 UTC m=+90.663549641" watchObservedRunningTime="2026-01-22 13:45:11.252599327 +0000 UTC m=+90.663709256" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.281747 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
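The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and the observed running time (for these pods no image pull happened, so firstStartedPulling/lastFinishedPulling are the zero time). A minimal sketch, using only the two timestamps from the multus-additional-cni-plugins-d9wdl line, recomputes the reported 71.007668103s; the layout string is standard Go time formatting, not kubelet code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's time.Parse accepts a fractional second on input even when the
	// layout omits it, so one layout covers both timestamps below.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2026-01-22 13:44:00 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-01-22 13:45:11.007668103 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 71.007668103, matching podStartSLOduration in the log line.
	fmt.Println(running.Sub(created).Seconds())
}
```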
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384439 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384486 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384504 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.384518 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.486899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.486959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.486990 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.487007 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.487019 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.589959 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.692706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.693198 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.693301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.693405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.693541 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.797730 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.882737 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:12:45.712138633 +0000 UTC
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.882922 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.882964 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:45:11 crc kubenswrapper[4769]: E0122 13:45:11.883385 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.882985 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:45:11 crc kubenswrapper[4769]: E0122 13:45:11.883530 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:45:11 crc kubenswrapper[4769]: E0122 13:45:11.883648 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901461 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:11 crc kubenswrapper[4769]: I0122 13:45:11.901478 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:11Z","lastTransitionTime":"2026-01-22T13:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.004921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.005010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.005038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.005074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.005099 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108429 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108493 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.108552 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211433 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211461 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.211541 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.313889 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.313970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.313993 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.314023 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.314045 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416753 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416820 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416834 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.416846 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520604 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.520649 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624188 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.624369 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726457 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.726952 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.830876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.830955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.830975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.831005 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.831026 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.883131 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.883196 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 03:52:29.663298278 +0000 UTC
Jan 22 13:45:12 crc kubenswrapper[4769]: E0122 13:45:12.884336 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.933417 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.933868 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.934051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.934199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:12 crc kubenswrapper[4769]: I0122 13:45:12.934354 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:12Z","lastTransitionTime":"2026-01-22T13:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.037838 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.038132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.038222 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.038301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.038361 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
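The certificate_manager lines interleaved above log a different rotation deadline on each pass (2026-01-09, 2026-01-14, and later 2025-11-14 and 2025-12-30) for the same kubelet-serving certificate expiring 2026-02-24. That is consistent with client-go's certificate manager picking a jittered deadline inside the final portion of the certificate's validity window; every deadline it computes here is already in the past relative to the log's clock (Jan 22), which is why a "Rotating certificates" entry follows shortly. A sketch of that deadline computation, with an assumed 70-90% jitter range and invented issue date, purely for illustration:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the last ~10-30% of the
// certificate's validity window (assumed range, modeled on client-go's
// certificate manager behavior).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC) // invented issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)   // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```

Because the deadline is re-jittered on each evaluation, repeated log lines for the same certificate legitimately show different dates.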
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141531 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141626 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.141730 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244406 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244422 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.244466 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.348668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.349150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.349304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.349446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.349579 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452634 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452741 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.452869 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556679 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.556758 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659427 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659448 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.659463 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762737 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.762766 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865851 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.865888 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.882225 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.882638 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.882695 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 13:45:13 crc kubenswrapper[4769]: E0122 13:45:13.882829 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 13:45:13 crc kubenswrapper[4769]: E0122 13:45:13.883184 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.883298 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:48:17.929275016 +0000 UTC
Jan 22 13:45:13 crc kubenswrapper[4769]: E0122 13:45:13.883669 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.884045 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"
Jan 22 13:45:13 crc kubenswrapper[4769]: E0122 13:45:13.884333 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967841 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967891 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:13 crc kubenswrapper[4769]: I0122 13:45:13.967915 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:13Z","lastTransitionTime":"2026-01-22T13:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071211 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.071259 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:14Z","lastTransitionTime":"2026-01-22T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.088216 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T13:45:14Z","lastTransitionTime":"2026-01-22T13:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.147042 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bqn6j" podStartSLOduration=75.147016736 podStartE2EDuration="1m15.147016736s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:11.26618668 +0000 UTC m=+90.677296619" watchObservedRunningTime="2026-01-22 13:45:14.147016736 +0000 UTC m=+93.558126685"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.149014 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"]
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.149691 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.151423 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.151952 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.152703 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.158076 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.302711 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f5925a8-3697-41cf-8d8c-6fded7005054-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.302854 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.302902 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f5925a8-3697-41cf-8d8c-6fded7005054-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.302968 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f5925a8-3697-41cf-8d8c-6fded7005054-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.303036 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404277 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f5925a8-3697-41cf-8d8c-6fded7005054-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404340 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404394 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f5925a8-3697-41cf-8d8c-6fded7005054-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404442 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f5925a8-3697-41cf-8d8c-6fded7005054-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404467 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404567 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.404686 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4f5925a8-3697-41cf-8d8c-6fded7005054-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.406262 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4f5925a8-3697-41cf-8d8c-6fded7005054-service-ca\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.416974 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f5925a8-3697-41cf-8d8c-6fded7005054-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.432852 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f5925a8-3697-41cf-8d8c-6fded7005054-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd"
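The reconciler_common and operation_generator entries above show the kubelet's volume manager walking each of the cluster-version-operator pod's five volumes through the same sequence: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, after which the pod's sandbox can be created. A toy reconcile loop in that shape follows; the types and function names are illustrative, not the kubelet's actual API:

```go
package main

import "fmt"

type volume struct {
	name    string
	mounted bool
}

// reconcile drives every desired volume to the mounted state, echoing the
// three-step sequence visible in the log above.
func reconcile(desired []*volume) {
	for _, v := range desired {
		if v.mounted {
			continue // already in the actual state of the world
		}
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v.name)
		fmt.Printf("MountVolume started for volume %q\n", v.name)
		v.mounted = true // the real SetUp would invoke the volume plugin here
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
	}
}

func main() {
	reconcile([]*volume{
		{name: "serving-cert"},
		{name: "etc-cvo-updatepayloads"},
		{name: "service-ca"},
		{name: "kube-api-access"},
		{name: "etc-ssl-certs"},
	})
}
```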
(UniqueName: \"kubernetes.io/projected/4f5925a8-3697-41cf-8d8c-6fded7005054-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-7r9qd\" (UID: \"4f5925a8-3697-41cf-8d8c-6fded7005054\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.472218 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" Jan 22 13:45:14 crc kubenswrapper[4769]: W0122 13:45:14.498105 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f5925a8_3697_41cf_8d8c_6fded7005054.slice/crio-12e9859eb28bb4f58bbaab620a7429dff5f137685c7007865bfc5f292cabba8c WatchSource:0}: Error finding container 12e9859eb28bb4f58bbaab620a7429dff5f137685c7007865bfc5f292cabba8c: Status 404 returned error can't find the container with id 12e9859eb28bb4f58bbaab620a7429dff5f137685c7007865bfc5f292cabba8c Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.882513 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:14 crc kubenswrapper[4769]: E0122 13:45:14.882887 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.883633 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:08:09.28041123 +0000 UTC Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.883762 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 22 13:45:14 crc kubenswrapper[4769]: I0122 13:45:14.896937 4769 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.464869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" event={"ID":"4f5925a8-3697-41cf-8d8c-6fded7005054","Type":"ContainerStarted","Data":"eaf1b242727cf1d1d8a5c0cf11d0f575370fb51b6259f51fe5fe18e636094896"} Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.465287 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" event={"ID":"4f5925a8-3697-41cf-8d8c-6fded7005054","Type":"ContainerStarted","Data":"12e9859eb28bb4f58bbaab620a7429dff5f137685c7007865bfc5f292cabba8c"} Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.883349 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.883391 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:15 crc kubenswrapper[4769]: I0122 13:45:15.883358 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:15 crc kubenswrapper[4769]: E0122 13:45:15.883509 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:15 crc kubenswrapper[4769]: E0122 13:45:15.883574 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:15 crc kubenswrapper[4769]: E0122 13:45:15.883695 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:16 crc kubenswrapper[4769]: I0122 13:45:16.882283 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:16 crc kubenswrapper[4769]: E0122 13:45:16.882420 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:17 crc kubenswrapper[4769]: I0122 13:45:17.883282 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:17 crc kubenswrapper[4769]: I0122 13:45:17.883326 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:17 crc kubenswrapper[4769]: I0122 13:45:17.883305 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:17 crc kubenswrapper[4769]: E0122 13:45:17.883419 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:17 crc kubenswrapper[4769]: E0122 13:45:17.883491 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:17 crc kubenswrapper[4769]: E0122 13:45:17.883557 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:18 crc kubenswrapper[4769]: I0122 13:45:18.646210 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:18 crc kubenswrapper[4769]: E0122 13:45:18.646379 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:45:18 crc kubenswrapper[4769]: E0122 13:45:18.646424 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs podName:9764ff0b-ae92-470b-af85-7c8bb41642ba nodeName:}" failed. No retries permitted until 2026-01-22 13:46:22.64641082 +0000 UTC m=+162.057520749 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs") pod "network-metrics-daemon-cfh49" (UID: "9764ff0b-ae92-470b-af85-7c8bb41642ba") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 13:45:18 crc kubenswrapper[4769]: I0122 13:45:18.883187 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:18 crc kubenswrapper[4769]: E0122 13:45:18.883310 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:19 crc kubenswrapper[4769]: I0122 13:45:19.882875 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:19 crc kubenswrapper[4769]: I0122 13:45:19.882928 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:19 crc kubenswrapper[4769]: I0122 13:45:19.882940 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:19 crc kubenswrapper[4769]: E0122 13:45:19.883079 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:19 crc kubenswrapper[4769]: E0122 13:45:19.883190 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:19 crc kubenswrapper[4769]: E0122 13:45:19.883397 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:20 crc kubenswrapper[4769]: I0122 13:45:20.883061 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:20 crc kubenswrapper[4769]: E0122 13:45:20.884051 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:21 crc kubenswrapper[4769]: I0122 13:45:21.882381 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:21 crc kubenswrapper[4769]: E0122 13:45:21.882509 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:21 crc kubenswrapper[4769]: I0122 13:45:21.882581 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:21 crc kubenswrapper[4769]: I0122 13:45:21.882699 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:21 crc kubenswrapper[4769]: E0122 13:45:21.882741 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:21 crc kubenswrapper[4769]: E0122 13:45:21.882997 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:22 crc kubenswrapper[4769]: I0122 13:45:22.882853 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:22 crc kubenswrapper[4769]: E0122 13:45:22.883023 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:23 crc kubenswrapper[4769]: I0122 13:45:23.883166 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:23 crc kubenswrapper[4769]: I0122 13:45:23.883201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:23 crc kubenswrapper[4769]: E0122 13:45:23.883287 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:23 crc kubenswrapper[4769]: I0122 13:45:23.883166 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:23 crc kubenswrapper[4769]: E0122 13:45:23.883380 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:23 crc kubenswrapper[4769]: E0122 13:45:23.883442 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:24 crc kubenswrapper[4769]: I0122 13:45:24.882908 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:24 crc kubenswrapper[4769]: E0122 13:45:24.883041 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:25 crc kubenswrapper[4769]: I0122 13:45:25.883185 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:25 crc kubenswrapper[4769]: I0122 13:45:25.883261 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:25 crc kubenswrapper[4769]: E0122 13:45:25.883537 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:25 crc kubenswrapper[4769]: E0122 13:45:25.883670 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:25 crc kubenswrapper[4769]: I0122 13:45:25.883211 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:25 crc kubenswrapper[4769]: E0122 13:45:25.883984 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:26 crc kubenswrapper[4769]: I0122 13:45:26.882619 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:26 crc kubenswrapper[4769]: E0122 13:45:26.883126 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:27 crc kubenswrapper[4769]: I0122 13:45:27.882581 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:27 crc kubenswrapper[4769]: I0122 13:45:27.882672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:27 crc kubenswrapper[4769]: E0122 13:45:27.882736 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:27 crc kubenswrapper[4769]: E0122 13:45:27.883124 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:27 crc kubenswrapper[4769]: I0122 13:45:27.883859 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:27 crc kubenswrapper[4769]: E0122 13:45:27.883988 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:28 crc kubenswrapper[4769]: I0122 13:45:28.882343 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:28 crc kubenswrapper[4769]: E0122 13:45:28.882834 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:28 crc kubenswrapper[4769]: I0122 13:45:28.882963 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:45:28 crc kubenswrapper[4769]: E0122 13:45:28.883719 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jrg8z_openshift-ovn-kubernetes(9c028db8-99b9-422d-ba46-e1a2db06ce3c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" Jan 22 13:45:29 crc kubenswrapper[4769]: I0122 13:45:29.882606 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:29 crc kubenswrapper[4769]: I0122 13:45:29.882672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:29 crc kubenswrapper[4769]: E0122 13:45:29.882722 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:29 crc kubenswrapper[4769]: I0122 13:45:29.882823 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:29 crc kubenswrapper[4769]: E0122 13:45:29.882963 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:29 crc kubenswrapper[4769]: E0122 13:45:29.883030 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:30 crc kubenswrapper[4769]: I0122 13:45:30.884076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:30 crc kubenswrapper[4769]: E0122 13:45:30.884910 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:31 crc kubenswrapper[4769]: I0122 13:45:31.882765 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:31 crc kubenswrapper[4769]: I0122 13:45:31.882820 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:31 crc kubenswrapper[4769]: I0122 13:45:31.882848 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:31 crc kubenswrapper[4769]: E0122 13:45:31.882930 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:31 crc kubenswrapper[4769]: E0122 13:45:31.883055 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:31 crc kubenswrapper[4769]: E0122 13:45:31.883189 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:32 crc kubenswrapper[4769]: I0122 13:45:32.885275 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:32 crc kubenswrapper[4769]: E0122 13:45:32.886083 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.523960 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/1.log" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.524705 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/0.log" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.524780 4769 generic.go:334] "Generic (PLEG): container finished" podID="d4186e93-df8a-49d3-9068-c8b8acd05baa" containerID="ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8" exitCode=1 Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.524874 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerDied","Data":"ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8"} Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.524929 4769 scope.go:117] "RemoveContainer" containerID="f4e835bfb6d47d3628a5f67cb226a00d51c3eebec57de5db55a54406bf1e6122" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.525711 4769 scope.go:117] "RemoveContainer" containerID="ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8" Jan 22 13:45:33 crc kubenswrapper[4769]: E0122 13:45:33.526055 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fclh4_openshift-multus(d4186e93-df8a-49d3-9068-c8b8acd05baa)\"" pod="openshift-multus/multus-fclh4" podUID="d4186e93-df8a-49d3-9068-c8b8acd05baa" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.554685 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-7r9qd" podStartSLOduration=93.554666563 podStartE2EDuration="1m33.554666563s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:15.487600317 +0000 UTC m=+94.898710256" watchObservedRunningTime="2026-01-22 13:45:33.554666563 +0000 UTC m=+112.965776512" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.882698 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:33 crc kubenswrapper[4769]: E0122 13:45:33.882863 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.882726 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:33 crc kubenswrapper[4769]: E0122 13:45:33.882929 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:33 crc kubenswrapper[4769]: I0122 13:45:33.882703 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:33 crc kubenswrapper[4769]: E0122 13:45:33.882984 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:34 crc kubenswrapper[4769]: I0122 13:45:34.528953 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/1.log" Jan 22 13:45:34 crc kubenswrapper[4769]: I0122 13:45:34.882933 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:34 crc kubenswrapper[4769]: E0122 13:45:34.883211 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:35 crc kubenswrapper[4769]: I0122 13:45:35.883087 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:35 crc kubenswrapper[4769]: I0122 13:45:35.883198 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:35 crc kubenswrapper[4769]: I0122 13:45:35.883116 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:35 crc kubenswrapper[4769]: E0122 13:45:35.883253 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:35 crc kubenswrapper[4769]: E0122 13:45:35.883416 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:35 crc kubenswrapper[4769]: E0122 13:45:35.883519 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:36 crc kubenswrapper[4769]: I0122 13:45:36.882426 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:36 crc kubenswrapper[4769]: E0122 13:45:36.882581 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:37 crc kubenswrapper[4769]: I0122 13:45:37.882757 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:37 crc kubenswrapper[4769]: E0122 13:45:37.883611 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:37 crc kubenswrapper[4769]: I0122 13:45:37.883969 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:37 crc kubenswrapper[4769]: E0122 13:45:37.884207 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:37 crc kubenswrapper[4769]: I0122 13:45:37.884483 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:37 crc kubenswrapper[4769]: E0122 13:45:37.884643 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:38 crc kubenswrapper[4769]: I0122 13:45:38.883140 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:38 crc kubenswrapper[4769]: E0122 13:45:38.883332 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:39 crc kubenswrapper[4769]: I0122 13:45:39.882379 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:39 crc kubenswrapper[4769]: I0122 13:45:39.882434 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:39 crc kubenswrapper[4769]: E0122 13:45:39.882512 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:39 crc kubenswrapper[4769]: E0122 13:45:39.882604 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:39 crc kubenswrapper[4769]: I0122 13:45:39.882973 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:39 crc kubenswrapper[4769]: E0122 13:45:39.883186 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:40 crc kubenswrapper[4769]: I0122 13:45:40.882740 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:40 crc kubenswrapper[4769]: E0122 13:45:40.884431 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:40 crc kubenswrapper[4769]: E0122 13:45:40.909370 4769 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 22 13:45:41 crc kubenswrapper[4769]: E0122 13:45:41.007933 4769 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 13:45:41 crc kubenswrapper[4769]: I0122 13:45:41.882519 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:41 crc kubenswrapper[4769]: I0122 13:45:41.882514 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:41 crc kubenswrapper[4769]: E0122 13:45:41.882667 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:41 crc kubenswrapper[4769]: E0122 13:45:41.882729 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:41 crc kubenswrapper[4769]: I0122 13:45:41.882543 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:41 crc kubenswrapper[4769]: E0122 13:45:41.882981 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:41 crc kubenswrapper[4769]: I0122 13:45:41.884937 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.558395 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.561716 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerStarted","Data":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.562853 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.590866 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podStartSLOduration=102.590848122 podStartE2EDuration="1m42.590848122s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:42.588408597 +0000 UTC m=+121.999518536" watchObservedRunningTime="2026-01-22 13:45:42.590848122 +0000 UTC m=+122.001958051" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.883138 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:42 crc kubenswrapper[4769]: E0122 13:45:42.883342 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:42 crc kubenswrapper[4769]: I0122 13:45:42.968941 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cfh49"] Jan 22 13:45:43 crc kubenswrapper[4769]: I0122 13:45:43.565526 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:43 crc kubenswrapper[4769]: E0122 13:45:43.565637 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:43 crc kubenswrapper[4769]: I0122 13:45:43.886352 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:43 crc kubenswrapper[4769]: I0122 13:45:43.886467 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:43 crc kubenswrapper[4769]: I0122 13:45:43.886509 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:43 crc kubenswrapper[4769]: E0122 13:45:43.886675 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:43 crc kubenswrapper[4769]: E0122 13:45:43.886809 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:43 crc kubenswrapper[4769]: E0122 13:45:43.886880 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:44 crc kubenswrapper[4769]: I0122 13:45:44.883329 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:44 crc kubenswrapper[4769]: E0122 13:45:44.883469 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:45 crc kubenswrapper[4769]: I0122 13:45:45.882962 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:45 crc kubenswrapper[4769]: I0122 13:45:45.883020 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:45 crc kubenswrapper[4769]: I0122 13:45:45.882974 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:45 crc kubenswrapper[4769]: E0122 13:45:45.883118 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:45 crc kubenswrapper[4769]: E0122 13:45:45.883238 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:45 crc kubenswrapper[4769]: E0122 13:45:45.883379 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:46 crc kubenswrapper[4769]: E0122 13:45:46.009560 4769 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 13:45:46 crc kubenswrapper[4769]: I0122 13:45:46.883485 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:46 crc kubenswrapper[4769]: E0122 13:45:46.883895 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:46 crc kubenswrapper[4769]: I0122 13:45:46.884142 4769 scope.go:117] "RemoveContainer" containerID="ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8" Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.578805 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/1.log" Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.579189 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3"} Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.882847 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.882932 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:47 crc kubenswrapper[4769]: E0122 13:45:47.882970 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:47 crc kubenswrapper[4769]: I0122 13:45:47.882936 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:47 crc kubenswrapper[4769]: E0122 13:45:47.883073 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:47 crc kubenswrapper[4769]: E0122 13:45:47.883206 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:48 crc kubenswrapper[4769]: I0122 13:45:48.882506 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:48 crc kubenswrapper[4769]: E0122 13:45:48.882670 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:49 crc kubenswrapper[4769]: I0122 13:45:49.883162 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:49 crc kubenswrapper[4769]: I0122 13:45:49.883193 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:49 crc kubenswrapper[4769]: I0122 13:45:49.883224 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:49 crc kubenswrapper[4769]: E0122 13:45:49.883278 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 13:45:49 crc kubenswrapper[4769]: E0122 13:45:49.883410 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 13:45:49 crc kubenswrapper[4769]: E0122 13:45:49.883598 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 13:45:50 crc kubenswrapper[4769]: I0122 13:45:50.882575 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:50 crc kubenswrapper[4769]: E0122 13:45:50.883608 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cfh49" podUID="9764ff0b-ae92-470b-af85-7c8bb41642ba" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.882755 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.882745 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.882913 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.885194 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.885231 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.885436 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 13:45:51 crc kubenswrapper[4769]: I0122 13:45:51.885496 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 13:45:52 crc kubenswrapper[4769]: I0122 13:45:52.882950 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:45:52 crc kubenswrapper[4769]: I0122 13:45:52.886295 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 13:45:52 crc kubenswrapper[4769]: I0122 13:45:52.886978 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.152614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.205601 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dltl2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.206435 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.207232 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.207909 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.211254 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.211295 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-65brj"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.211714 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.212550 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.213924 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.214984 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.224095 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.224461 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.225203 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.225486 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.225345 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.225938 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.227340 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.240813 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.241313 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.241703 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.241971 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.242186 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.242496 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.242669 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.243377 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.243873 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.243948 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.243975 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.245480 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtzpg"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.245902 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.246249 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.247373 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.249857 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjt2k"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.250669 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.250989 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.259868 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.260508 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.260893 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.260919 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.261857 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.262286 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.262475 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.262846 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.263344 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264008 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264129 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264129 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264711 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.264966 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.265466 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.265745 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2vm4g"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.265045 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.266156 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.266239 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.266419 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.266683 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.267092 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.267186 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.265916 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.269723 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.270077 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.270416 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.270744 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.271311 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.271580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.272044 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mgft7"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.272287 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.272532 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.272940 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273064 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273324 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273418 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273585 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273735 
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273735 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274004 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274160 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274268 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274362 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274464 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.274564 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.273752 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.276077 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275077 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275175 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275166 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.276276 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mgft7"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275359 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275429 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275473 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275629 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.275667 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.276776 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkbvd"]
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.304974 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"]
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.277343 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.280636 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.280780 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.280860 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.280977 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281009 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281203 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281245 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281302 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281346 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281357 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281395 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281392 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281436 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281435 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281474 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281510 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.281511 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.299411 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.299487 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.302478 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.303015 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.319316 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.321527 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.321669 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.322482 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.323361 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.325168 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.325336 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.325500 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.326730 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327218 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327312 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327391 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327581 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbtbb\" (UniqueName: \"kubernetes.io/projected/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-kube-api-access-vbtbb\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327648 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") pod \"console-f9d7485db-nwrtw\" (UID: 
\"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327714 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328087 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328168 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-node-pullsecrets\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328359 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-config\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328445 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328530 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-config\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328622 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-auth-proxy-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: 
\"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328700 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328763 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-serving-cert\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.328855 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.332932 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333073 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-encryption-config\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333164 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333268 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333342 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc 
kubenswrapper[4769]: I0122 13:45:55.333414 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-service-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333483 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333570 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333645 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a5be64-af9a-4376-9105-c36371ad5069-audit-dir\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333722 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-config\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333848 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.333939 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-etcd-client\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334012 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.329140 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-dltl2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334163 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-65brj"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334197 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtzpg"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334212 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334227 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mm5p"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.329203 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.327682 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.334084 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335054 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335013 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335130 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335464 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5758b1f6-5135-428d-ad0b-6892a49d1800-serving-cert\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335541 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c2lv\" (UniqueName: \"kubernetes.io/projected/92eb7fb7-d1b8-45ad-b8ff-8411d04eb048-kube-api-access-4c2lv\") pod \"downloads-7954f5f757-mgft7\" (UID: \"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048\") " pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335622 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335693 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335761 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zdt\" (UniqueName: \"kubernetes.io/projected/81a5be64-af9a-4376-9105-c36371ad5069-kube-api-access-85zdt\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335842 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb62s\" (UniqueName: \"kubernetes.io/projected/15723c66-27d3-4cea-9962-e75bbe7bb967-kube-api-access-nb62s\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335912 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-audit-policies\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.335986 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336091 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-image-import-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336169 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6d9p\" (UniqueName: \"kubernetes.io/projected/52f284ae-bace-4bd8-8140-7f37fbad55d4-kube-api-access-r6d9p\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336266 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336362 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336434 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ckx\" (UniqueName: \"kubernetes.io/projected/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-kube-api-access-m6ckx\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336515 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336598 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336692 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336802 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjwfr\" (UniqueName: \"kubernetes.io/projected/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-kube-api-access-qjwfr\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.336895 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-audit\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337018 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-client\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337153 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337250 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337335 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-encryption-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337402 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc 
kubenswrapper[4769]: I0122 13:45:55.337465 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337528 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337601 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce7607b6-0e74-47ba-8875-057821862224-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337676 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337742 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52f284ae-bace-4bd8-8140-7f37fbad55d4-serving-cert\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337825 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-serving-cert\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.337895 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-machine-approver-tls\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338017 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz27q\" (UniqueName: \"kubernetes.io/projected/c1a96247-d002-4f96-9695-16a4011f3ad5-kube-api-access-kz27q\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc 
kubenswrapper[4769]: I0122 13:45:55.338169 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338304 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6kks\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-kube-api-access-r6kks\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338396 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce7607b6-0e74-47ba-8875-057821862224-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338495 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.338604 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a96247-d002-4f96-9695-16a4011f3ad5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339066 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339163 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339278 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-serving-cert\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339523 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339643 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwl46\" (UniqueName: \"kubernetes.io/projected/5758b1f6-5135-428d-ad0b-6892a49d1800-kube-api-access-wwl46\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339775 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8n48\" (UniqueName: \"kubernetes.io/projected/43448f45-644f-4b5a-aa06-567b5c8f8279-kube-api-access-l8n48\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxw4z\" (UniqueName: \"kubernetes.io/projected/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-kube-api-access-hxw4z\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339978 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.340300 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339580 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.340773 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.339973 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.340425 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.341524 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.342144 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.342279 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.342474 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a96247-d002-4f96-9695-16a4011f3ad5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.342596 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-images\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344237 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-client\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344342 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-audit-dir\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344430 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-config\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-trusted-ca\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " 
pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343876 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343606 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344582 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343665 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343711 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.343829 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.345065 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pb7qw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.344167 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.345692 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.345909 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.349922 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.350555 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.364677 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.368656 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.368903 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.371313 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.371822 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.371978 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.377018 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.379565 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ds5qk"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.379760 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.381206 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.390479 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.390625 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.391165 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.391958 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.392874 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.393467 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.393988 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.395658 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.396642 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.400471 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.401147 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.401643 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.402834 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.403030 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.403978 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.404146 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.404710 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.405503 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.406107 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.407829 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.408592 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.408949 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcpwt"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.409552 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.409711 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.410227 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5qtks"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.411238 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.411432 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.412438 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.412669 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2vm4g"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.415551 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.418331 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.419646 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.421288 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mgft7"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.424242 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.428105 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.428145 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.429828 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mm5p"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.436587 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.436634 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.437569 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.438803 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ds5qk"] Jan 22 13:45:55 crc 
kubenswrapper[4769]: I0122 13:45:55.439865 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjt2k"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.444503 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.445884 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkbvd"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446455 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446585 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-service-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446605 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446632 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446671 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a5be64-af9a-4376-9105-c36371ad5069-audit-dir\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446690 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-profile-collector-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446704 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-config\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446742 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446756 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-etcd-client\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446814 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8b75cc3-465e-4542-82ee-4950744e89a0-metrics-tls\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446842 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446863 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446883 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5758b1f6-5135-428d-ad0b-6892a49d1800-serving-cert\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446902 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz965\" (UniqueName: \"kubernetes.io/projected/5c5cf556-ec03-4f29-94ed-13a58f54275c-kube-api-access-rz965\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446925 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.446964 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447066 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85zdt\" (UniqueName: \"kubernetes.io/projected/81a5be64-af9a-4376-9105-c36371ad5069-kube-api-access-85zdt\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447095 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c2lv\" (UniqueName: \"kubernetes.io/projected/92eb7fb7-d1b8-45ad-b8ff-8411d04eb048-kube-api-access-4c2lv\") pod \"downloads-7954f5f757-mgft7\" (UID: \"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048\") " pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447120 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nb62s\" (UniqueName: \"kubernetes.io/projected/15723c66-27d3-4cea-9962-e75bbe7bb967-kube-api-access-nb62s\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447143 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-audit-policies\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447166 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-image-import-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447209 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6d9p\" (UniqueName: \"kubernetes.io/projected/52f284ae-bace-4bd8-8140-7f37fbad55d4-kube-api-access-r6d9p\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447232 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4v8\" (UniqueName: \"kubernetes.io/projected/db7a69ec-2a82-4f9b-b83a-42237a02087e-kube-api-access-qb4v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447283 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447306 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ckx\" (UniqueName: 
\"kubernetes.io/projected/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-kube-api-access-m6ckx\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447340 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447363 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447385 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjwfr\" (UniqueName: \"kubernetes.io/projected/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-kube-api-access-qjwfr\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447408 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba0299e2-1902-461d-bf42-f3d5dfe205ff-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447430 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db7a69ec-2a82-4f9b-b83a-42237a02087e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447489 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-config\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447494 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447494 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") pod 
\"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447547 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/81a5be64-af9a-4376-9105-c36371ad5069-audit-dir\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447554 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-audit\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447578 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-client\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447598 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447619 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-srv-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447642 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-metrics-certs\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447668 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447689 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-encryption-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447733 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447756 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447776 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce7607b6-0e74-47ba-8875-057821862224-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447817 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447838 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52f284ae-bace-4bd8-8140-7f37fbad55d4-serving-cert\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447904 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447926 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2pzt\" (UniqueName: \"kubernetes.io/projected/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-kube-api-access-p2pzt\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447950 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-serving-cert\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447972 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.447971 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjgq6\" (UniqueName: \"kubernetes.io/projected/ba0299e2-1902-461d-bf42-f3d5dfe205ff-kube-api-access-wjgq6\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-machine-approver-tls\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448039 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz27q\" (UniqueName: \"kubernetes.io/projected/c1a96247-d002-4f96-9695-16a4011f3ad5-kube-api-access-kz27q\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448057 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448073 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6kks\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-kube-api-access-r6kks\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448088 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce7607b6-0e74-47ba-8875-057821862224-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448105 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a96247-d002-4f96-9695-16a4011f3ad5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 
13:45:55.448106 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448121 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448138 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448153 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448169 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-serving-cert\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwl46\" (UniqueName: \"kubernetes.io/projected/5758b1f6-5135-428d-ad0b-6892a49d1800-kube-api-access-wwl46\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448219 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448244 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxw4z\" (UniqueName: 
\"kubernetes.io/projected/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-kube-api-access-hxw4z\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448267 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448291 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8n48\" (UniqueName: \"kubernetes.io/projected/43448f45-644f-4b5a-aa06-567b5c8f8279-kube-api-access-l8n48\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448313 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a96247-d002-4f96-9695-16a4011f3ad5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448332 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-images\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448352 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-client\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-audit-dir\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448397 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-config\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448419 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-trusted-ca\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc 
kubenswrapper[4769]: I0122 13:45:55.448441 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448466 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk48n\" (UniqueName: \"kubernetes.io/projected/d8b75cc3-465e-4542-82ee-4950744e89a0-kube-api-access-vk48n\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448488 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448514 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-default-certificate\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448537 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448548 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-service-ca\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448554 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvrvt\" (UniqueName: \"kubernetes.io/projected/81769776-c586-45a0-a9ed-42ce4789bb28-kube-api-access-cvrvt\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448602 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448629 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vbtbb\" (UniqueName: \"kubernetes.io/projected/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-kube-api-access-vbtbb\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448651 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448673 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448692 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448714 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448758 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-node-pullsecrets\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.448783 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db199c04-6231-46b3-a4e7-5cd74604b005-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449023 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-config\") pod 
\"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449053 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449079 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-config\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-auth-proxy-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449151 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-serving-cert\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449198 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449220 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-encryption-config\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 
13:45:55.449337 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5cf556-ec03-4f29-94ed-13a58f54275c-service-ca-bundle\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449365 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449385 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-stats-auth\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.449900 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.450053 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.451406 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-service-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.451642 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.452397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-images\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.452568 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.452777 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.453287 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-audit\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.453371 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.454860 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455084 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455373 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455579 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455611 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455657 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-audit-dir\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.455715 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.468303 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.468907 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-etcd-client\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.469287 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-config\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.469667 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-encryption-config\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.470288 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.470309 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-serving-cert\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.470960 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/81a5be64-af9a-4376-9105-c36371ad5069-audit-policies\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.471048 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.471112 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/15723c66-27d3-4cea-9962-e75bbe7bb967-node-pullsecrets\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.471320 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.471659 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-config\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.472637 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.472762 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-serving-cert\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.472880 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/ce7607b6-0e74-47ba-8875-057821862224-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.473376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-trusted-ca\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.473514 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 
13:45:55.473769 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.473991 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-serving-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474022 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce7607b6-0e74-47ba-8875-057821862224-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474031 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474219 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/43448f45-644f-4b5a-aa06-567b5c8f8279-etcd-client\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474319 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1a96247-d002-4f96-9695-16a4011f3ad5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474639 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474972 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-machine-approver-tls\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.474966 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5758b1f6-5135-428d-ad0b-6892a49d1800-serving-cert\") pod 
\"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475337 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1a96247-d002-4f96-9695-16a4011f3ad5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475640 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/15723c66-27d3-4cea-9962-e75bbe7bb967-image-import-ca\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475862 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475348 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52f284ae-bace-4bd8-8140-7f37fbad55d4-serving-cert\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.475906 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5758b1f6-5135-428d-ad0b-6892a49d1800-config\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476191 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476370 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-auth-proxy-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476623 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476721 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-config\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476819 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52f284ae-bace-4bd8-8140-7f37fbad55d4-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.476868 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.477192 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.477880 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-etcd-client\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.478147 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.478470 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.478622 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.478760 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.479395 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.480386 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.480538 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-encryption-config\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.480734 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.481294 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15723c66-27d3-4cea-9962-e75bbe7bb967-serving-cert\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.483135 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.483564 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.483938 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.484429 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a5be64-af9a-4376-9105-c36371ad5069-serving-cert\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.485037 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.486022 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.487031 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.488134 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.489446 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.489979 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcpwt"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.491007 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.492084 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.493100 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.494077 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ggj4q"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.494598 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.495063 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdxvs"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.496061 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.496453 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.497483 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdxvs"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.498467 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5qtks"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.499716 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rkk84"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.500358 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.500715 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rkk84"] Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.516580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.530477 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550070 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db199c04-6231-46b3-a4e7-5cd74604b005-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550139 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5cf556-ec03-4f29-94ed-13a58f54275c-service-ca-bundle\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550164 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-stats-auth\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550185 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550213 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-profile-collector-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550234 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550264 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8b75cc3-465e-4542-82ee-4950744e89a0-metrics-tls\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550288 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz965\" (UniqueName: \"kubernetes.io/projected/5c5cf556-ec03-4f29-94ed-13a58f54275c-kube-api-access-rz965\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550386 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4v8\" (UniqueName: \"kubernetes.io/projected/db7a69ec-2a82-4f9b-b83a-42237a02087e-kube-api-access-qb4v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550441 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba0299e2-1902-461d-bf42-f3d5dfe205ff-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550466 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db7a69ec-2a82-4f9b-b83a-42237a02087e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550488 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-srv-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550492 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550507 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-metrics-certs\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550543 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550564 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2pzt\" (UniqueName: 
\"kubernetes.io/projected/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-kube-api-access-p2pzt\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550587 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjgq6\" (UniqueName: \"kubernetes.io/projected/ba0299e2-1902-461d-bf42-f3d5dfe205ff-kube-api-access-wjgq6\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550662 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk48n\" (UniqueName: \"kubernetes.io/projected/d8b75cc3-465e-4542-82ee-4950744e89a0-kube-api-access-vk48n\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550684 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-default-certificate\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.550717 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvrvt\" (UniqueName: \"kubernetes.io/projected/81769776-c586-45a0-a9ed-42ce4789bb28-kube-api-access-cvrvt\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.571363 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.589945 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.610075 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.629664 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.650980 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.670521 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.682927 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba0299e2-1902-461d-bf42-f3d5dfe205ff-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.690258 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.710574 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.730165 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.750366 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.770855 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.790193 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.794376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-srv-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.810103 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.814699 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81769776-c586-45a0-a9ed-42ce4789bb28-profile-collector-cert\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.830077 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.851093 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.860331 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-stats-auth\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.871133 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.891267 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.896088 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-default-certificate\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.911595 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.931480 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.945928 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c5cf556-ec03-4f29-94ed-13a58f54275c-metrics-certs\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.950919 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.953759 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5cf556-ec03-4f29-94ed-13a58f54275c-service-ca-bundle\") pod \"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.970462 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 13:45:55 crc kubenswrapper[4769]: I0122 13:45:55.990776 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.010381 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.030706 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.051033 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.070318 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.091108 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.110031 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.130116 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.151693 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 
22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.170612 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.190274 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.210065 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.216731 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8b75cc3-465e-4542-82ee-4950744e89a0-metrics-tls\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.230667 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.250966 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.256734 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/db7a69ec-2a82-4f9b-b83a-42237a02087e-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.270947 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.290693 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.311105 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.331218 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.350415 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.371082 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.390410 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.409235 4769 request.go:700] Waited for 1.014828145s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0 Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.441102 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.451280 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.471616 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.491846 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.511205 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.530529 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551333 4769 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551415 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config podName:db199c04-6231-46b3-a4e7-5cd74604b005 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:57.051396758 +0000 UTC m=+136.462506687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-28gzs" (UID: "db199c04-6231-46b3-a4e7-5cd74604b005") : failed to sync configmap cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551414 4769 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.551481 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551511 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert podName:e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:57.051485592 +0000 UTC m=+136.462595521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-jr9vm" (UID: "e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43") : failed to sync secret cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551434 4769 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: E0122 13:45:56.551561 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert podName:db199c04-6231-46b3-a4e7-5cd74604b005 nodeName:}" failed. No retries permitted until 2026-01-22 13:45:57.051553184 +0000 UTC m=+136.462663263 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-28gzs" (UID: "db199c04-6231-46b3-a4e7-5cd74604b005") : failed to sync secret cache: timed out waiting for the condition
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.571039 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.592281 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.612762 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.630870 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.650116 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.671315 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.691167 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.710879 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.730415 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.751127 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.771918 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.791185 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
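The E-level records above come from the kubelet's volume manager: MountVolume.SetUp fails because the configmap/secret informer caches had not synced yet, and nestedpendingoperations schedules the retry with a per-operation exponential backoff, here starting at 500ms ("No retries permitted until ... (durationBeforeRetry 500ms)"). The successful mounts logged a second later show the retry clearing once the caches populate. A rough sketch of that bookkeeping, with illustrative initial/max durations (the real implementation lives inside kubelet and is keyed per volume):

// backoff_sketch.go: per-operation retry pacing in the spirit of the
// "No retries permitted until ..." records; values are illustrative.
package main

import (
	"fmt"
	"time"
)

type expBackoff struct {
	lastError time.Time
	duration  time.Duration
}

// update doubles the wait after each failure, capped at maxD.
func (b *expBackoff) update(initial, maxD time.Duration) {
	switch {
	case b.duration == 0:
		b.duration = initial
	case 2*b.duration < maxD:
		b.duration *= 2
	default:
		b.duration = maxD
	}
	b.lastError = time.Now()
}

// safeToRetry reports whether the retry window has elapsed.
func (b *expBackoff) safeToRetry() bool {
	return time.Now().After(b.lastError.Add(b.duration))
}

func main() {
	var b expBackoff
	b.update(500*time.Millisecond, 2*time.Minute) // first failure: 500ms, as logged above
	fmt.Println("no retries permitted until", b.lastError.Add(b.duration))
	fmt.Println("safe to retry now?", b.safeToRetry())
}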
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.811197 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.830035 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.850167 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.878509 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.891270 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.910210 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.931614 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.950609 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.970738 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 22 13:45:56 crc kubenswrapper[4769]: I0122 13:45:56.991863 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.010321 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.029744 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.050923 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.070941 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.071196 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"
Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.071231 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.071574 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.073330 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db199c04-6231-46b3-a4e7-5cd74604b005-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.078722 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db199c04-6231-46b3-a4e7-5cd74604b005-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.084249 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.143214 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtbb\" (UniqueName: \"kubernetes.io/projected/40076fe2-006c-4dc7-ac7c-71fa27c9bb7d-kube-api-access-vbtbb\") pod \"openshift-config-operator-7777fb866f-v24vn\" (UID: \"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.169328 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjwfr\" (UniqueName: \"kubernetes.io/projected/a6d7f1cf-d68c-4658-98b2-e18d8e70edb8-kube-api-access-qjwfr\") pod \"cluster-samples-operator-665b6dd947-s9v5x\" (UID: \"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.170701 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.183526 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz27q\" (UniqueName: \"kubernetes.io/projected/c1a96247-d002-4f96-9695-16a4011f3ad5-kube-api-access-kz27q\") pod \"openshift-apiserver-operator-796bbdcf4f-dbzkw\" (UID: \"c1a96247-d002-4f96-9695-16a4011f3ad5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.201229 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.214671 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8n48\" (UniqueName: \"kubernetes.io/projected/43448f45-644f-4b5a-aa06-567b5c8f8279-kube-api-access-l8n48\") pod \"etcd-operator-b45778765-bkbvd\" (UID: \"43448f45-644f-4b5a-aa06-567b5c8f8279\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.229332 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") pod \"controller-manager-879f6c89f-k5psf\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.259644 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6kks\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-kube-api-access-r6kks\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.271291 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85zdt\" (UniqueName: \"kubernetes.io/projected/81a5be64-af9a-4376-9105-c36371ad5069-kube-api-access-85zdt\") pod \"apiserver-7bbb656c7d-t5985\" (UID: \"81a5be64-af9a-4376-9105-c36371ad5069\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.288372 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c2lv\" (UniqueName: \"kubernetes.io/projected/92eb7fb7-d1b8-45ad-b8ff-8411d04eb048-kube-api-access-4c2lv\") pod \"downloads-7954f5f757-mgft7\" (UID: \"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048\") " pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.308994 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb62s\" (UniqueName: \"kubernetes.io/projected/15723c66-27d3-4cea-9962-e75bbe7bb967-kube-api-access-nb62s\") pod \"apiserver-76f77b778f-jjt2k\" (UID: \"15723c66-27d3-4cea-9962-e75bbe7bb967\") " pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.317309 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.324511 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") pod \"oauth-openshift-558db77b4-jtzpg\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.327091 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.353777 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce7607b6-0e74-47ba-8875-057821862224-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-xmh8s\" (UID: \"ce7607b6-0e74-47ba-8875-057821862224\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.368968 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ckx\" (UniqueName: \"kubernetes.io/projected/8c1e55ad-d8f0-4ceb-b929-e4f09903df58-kube-api-access-m6ckx\") pod \"machine-approver-56656f9798-2s5j2\" (UID: \"8c1e55ad-d8f0-4ceb-b929-e4f09903df58\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.375626 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-v24vn"] Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.385664 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") pod \"route-controller-manager-6576b87f9c-8qp45\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.405397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") pod \"console-f9d7485db-nwrtw\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.424267 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6d9p\" (UniqueName: \"kubernetes.io/projected/52f284ae-bace-4bd8-8140-7f37fbad55d4-kube-api-access-r6d9p\") pod \"authentication-operator-69f744f599-dltl2\" (UID: \"52f284ae-bace-4bd8-8140-7f37fbad55d4\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.425968 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.428858 4769 request.go:700] Waited for 1.952837447s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.428968 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.445962 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwl46\" (UniqueName: \"kubernetes.io/projected/5758b1f6-5135-428d-ad0b-6892a49d1800-kube-api-access-wwl46\") pod \"console-operator-58897d9998-2vm4g\" (UID: \"5758b1f6-5135-428d-ad0b-6892a49d1800\") " pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.461358 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.465570 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxw4z\" (UniqueName: \"kubernetes.io/projected/f4e58a9e-ecc8-43de-9518-0b014b2a27d2-kube-api-access-hxw4z\") pod \"machine-api-operator-5694c8668f-65brj\" (UID: \"f4e58a9e-ecc8-43de-9518-0b014b2a27d2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.470702 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.489355 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.490427 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.494001 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mgft7"] Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.497624 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.510238 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.510562 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.518938 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.529915 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.533121 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.537804 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bkbvd"] Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.549485 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.551705 4769 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.570574 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.590510 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.613398 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.617753 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" event={"ID":"43448f45-644f-4b5a-aa06-567b5c8f8279","Type":"ContainerStarted","Data":"4684ed1cfcb96270523a6a8d7bd57101ca77a0e2ffbd8a1ed6db94460013be10"} Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.618777 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mgft7" event={"ID":"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048","Type":"ContainerStarted","Data":"2faa745b588ac7a75553576155a5f95f83d99449a4fa8e63ecfe096f528d750f"} Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.619923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" event={"ID":"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d","Type":"ContainerStarted","Data":"a1034525d97a28712dee57e0fe1cf0efc3208802ef24e184494647e1aacdd31a"} Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.624638 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw"] Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.628865 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x"] Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.629242 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.630273 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.655709 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.673866 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.680104 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvrvt\" (UniqueName: \"kubernetes.io/projected/81769776-c586-45a0-a9ed-42ce4789bb28-kube-api-access-cvrvt\") pod \"catalog-operator-68c6474976-q8sxk\" (UID: \"81769776-c586-45a0-a9ed-42ce4789bb28\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.686891 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db199c04-6231-46b3-a4e7-5cd74604b005-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-28gzs\" (UID: \"db199c04-6231-46b3-a4e7-5cd74604b005\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.706743 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2pzt\" (UniqueName: \"kubernetes.io/projected/e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43-kube-api-access-p2pzt\") pod \"package-server-manager-789f6589d5-jr9vm\" (UID: \"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.724532 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjgq6\" (UniqueName: \"kubernetes.io/projected/ba0299e2-1902-461d-bf42-f3d5dfe205ff-kube-api-access-wjgq6\") pod \"multus-admission-controller-857f4d67dd-9mm5p\" (UID: \"ba0299e2-1902-461d-bf42-f3d5dfe205ff\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.744025 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk48n\" (UniqueName: \"kubernetes.io/projected/d8b75cc3-465e-4542-82ee-4950744e89a0-kube-api-access-vk48n\") pod \"dns-operator-744455d44c-ds5qk\" (UID: \"d8b75cc3-465e-4542-82ee-4950744e89a0\") " pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:57 crc kubenswrapper[4769]: W0122 13:45:57.753149 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a96247_d002_4f96_9695_16a4011f3ad5.slice/crio-72b0fccf83855e247e0a6c9983b7b2a8640e6a27df7042e164f8fe56bfcb6df9 WatchSource:0}: Error finding container 72b0fccf83855e247e0a6c9983b7b2a8640e6a27df7042e164f8fe56bfcb6df9: Status 404 returned error can't find the container with id 72b0fccf83855e247e0a6c9983b7b2a8640e6a27df7042e164f8fe56bfcb6df9 Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.768287 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4v8\" (UniqueName: \"kubernetes.io/projected/db7a69ec-2a82-4f9b-b83a-42237a02087e-kube-api-access-qb4v8\") pod \"control-plane-machine-set-operator-78cbb6b69f-pzj8w\" (UID: \"db7a69ec-2a82-4f9b-b83a-42237a02087e\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.785706 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz965\" (UniqueName: \"kubernetes.io/projected/5c5cf556-ec03-4f29-94ed-13a58f54275c-kube-api-access-rz965\") pod 
\"router-default-5444994796-pb7qw\" (UID: \"5c5cf556-ec03-4f29-94ed-13a58f54275c\") " pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.801070 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.809162 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894263 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9e87e73-cad4-48f0-81f9-d636cd123278-metrics-tls\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894344 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ddda125-6c9a-4546-901a-a32dd6e99251-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894373 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0335a481-e6c1-459c-8325-5da8dfcbcdb1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894437 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-srv-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894465 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f88820f-4a65-4799-86f7-19be89871165-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894487 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e9a409b5-e519-4c64-bc56-0b74757f2181-serving-cert\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894527 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f91eb97-e4cc-4a67-9426-7aec499b4485-proxy-tls\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894551 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3640120-a52b-4ee5-aacb-83df135f0470-cert\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894602 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a409b5-e519-4c64-bc56-0b74757f2181-config\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894626 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ddda125-6c9a-4546-901a-a32dd6e99251-config\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894676 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894697 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nl6c\" (UniqueName: \"kubernetes.io/projected/e01e843d-f221-43ed-a309-e21fe298f64f-kube-api-access-8nl6c\") pod \"migrator-59844c95c7-d8wjb\" (UID: \"e01e843d-f221-43ed-a309-e21fe298f64f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894720 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-webhook-cert\") pod 
\"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894742 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0335a481-e6c1-459c-8325-5da8dfcbcdb1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894765 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894808 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk5bd\" (UniqueName: \"kubernetes.io/projected/0335a481-e6c1-459c-8325-5da8dfcbcdb1-kube-api-access-fk5bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894835 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894861 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-images\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894881 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f88820f-4a65-4799-86f7-19be89871165-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-cabundle\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894924 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8xv\" 
(UniqueName: \"kubernetes.io/projected/73369200-053d-4d9d-a775-c3cb76119697-kube-api-access-tb8xv\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894944 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f91eb97-e4cc-4a67-9426-7aec499b4485-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894979 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpcbg\" (UniqueName: \"kubernetes.io/projected/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-kube-api-access-tpcbg\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.894997 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-key\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.895028 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vr8\" (UniqueName: \"kubernetes.io/projected/10a252bf-8be9-40ee-9632-4abbb989e43d-kube-api-access-77vr8\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.895050 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898449 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898663 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898725 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-apiservice-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898825 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jswq\" (UniqueName: \"kubernetes.io/projected/153c6af8-5ac1-4256-ad20-992ad604c61b-kube-api-access-2jswq\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898912 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898941 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.898986 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxm44\" (UniqueName: \"kubernetes.io/projected/2f88820f-4a65-4799-86f7-19be89871165-kube-api-access-cxm44\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899093 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899155 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899180 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899219 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvd2r\" (UniqueName: \"kubernetes.io/projected/e9a409b5-e519-4c64-bc56-0b74757f2181-kube-api-access-dvd2r\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899248 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899273 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wkzs\" (UniqueName: \"kubernetes.io/projected/3f91eb97-e4cc-4a67-9426-7aec499b4485-kube-api-access-9wkzs\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899326 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ddda125-6c9a-4546-901a-a32dd6e99251-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899497 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f89vh\" (UniqueName: \"kubernetes.io/projected/e3640120-a52b-4ee5-aacb-83df135f0470-kube-api-access-f89vh\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899708 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btx8b\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-kube-api-access-btx8b\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: 
\"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899737 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899854 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73369200-053d-4d9d-a775-c3cb76119697-proxy-tls\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899877 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9e87e73-cad4-48f0-81f9-d636cd123278-trusted-ca\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899908 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899952 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-config\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.899999 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.900018 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10a252bf-8be9-40ee-9632-4abbb989e43d-tmpfs\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:57 crc kubenswrapper[4769]: E0122 13:45:57.901700 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 13:45:58.401688235 +0000 UTC m=+137.812798164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.944374 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.953014 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.974182 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.977980 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s"] Jan 22 13:45:57 crc kubenswrapper[4769]: I0122 13:45:57.984402 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:45:57 crc kubenswrapper[4769]: W0122 13:45:57.992171 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b0fa7ff_24c4_431c_bc35_87f9483d5c70.slice/crio-99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94 WatchSource:0}: Error finding container 99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94: Status 404 returned error can't find the container with id 99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94 Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002642 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002890 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-cabundle\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8xv\" (UniqueName: \"kubernetes.io/projected/73369200-053d-4d9d-a775-c3cb76119697-kube-api-access-tb8xv\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002943 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/3f91eb97-e4cc-4a67-9426-7aec499b4485-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002964 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpcbg\" (UniqueName: \"kubernetes.io/projected/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-kube-api-access-tpcbg\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.002985 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-key\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vr8\" (UniqueName: \"kubernetes.io/projected/10a252bf-8be9-40ee-9632-4abbb989e43d-kube-api-access-77vr8\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003031 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-plugins-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003058 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003079 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003101 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-mountpoint-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003125 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: 
\"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003150 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003177 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-registration-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-apiservice-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003226 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkz49\" (UniqueName: \"kubernetes.io/projected/eed71162-446a-4681-a3a8-23247149532c-kube-api-access-xkz49\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003254 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jswq\" (UniqueName: \"kubernetes.io/projected/153c6af8-5ac1-4256-ad20-992ad604c61b-kube-api-access-2jswq\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003286 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-socket-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003310 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxm44\" (UniqueName: \"kubernetes.io/projected/2f88820f-4a65-4799-86f7-19be89871165-kube-api-access-cxm44\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" 
Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003375 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-csi-data-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003401 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-certs\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003425 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003451 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003476 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003502 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-config-volume\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003526 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvd2r\" (UniqueName: \"kubernetes.io/projected/e9a409b5-e519-4c64-bc56-0b74757f2181-kube-api-access-dvd2r\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003548 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003571 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wkzs\" (UniqueName: 
\"kubernetes.io/projected/3f91eb97-e4cc-4a67-9426-7aec499b4485-kube-api-access-9wkzs\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003592 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003628 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ddda125-6c9a-4546-901a-a32dd6e99251-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003655 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f89vh\" (UniqueName: \"kubernetes.io/projected/e3640120-a52b-4ee5-aacb-83df135f0470-kube-api-access-f89vh\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003674 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btx8b\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-kube-api-access-btx8b\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003696 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-metrics-tls\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003720 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003746 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73369200-053d-4d9d-a775-c3cb76119697-proxy-tls\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003767 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9e87e73-cad4-48f0-81f9-d636cd123278-trusted-ca\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003787 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5d78\" (UniqueName: \"kubernetes.io/projected/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-kube-api-access-b5d78\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003828 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-config\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003894 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10a252bf-8be9-40ee-9632-4abbb989e43d-tmpfs\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003917 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9e87e73-cad4-48f0-81f9-d636cd123278-metrics-tls\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.003959 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-node-bootstrap-token\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004018 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0335a481-e6c1-459c-8325-5da8dfcbcdb1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc 
kubenswrapper[4769]: I0122 13:45:58.004043 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ddda125-6c9a-4546-901a-a32dd6e99251-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004087 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-srv-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004133 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a409b5-e519-4c64-bc56-0b74757f2181-serving-cert\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004157 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f88820f-4a65-4799-86f7-19be89871165-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f91eb97-e4cc-4a67-9426-7aec499b4485-proxy-tls\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004211 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3640120-a52b-4ee5-aacb-83df135f0470-cert\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004249 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a409b5-e519-4c64-bc56-0b74757f2181-config\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004274 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004296 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ddda125-6c9a-4546-901a-a32dd6e99251-config\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004334 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004379 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nl6c\" (UniqueName: \"kubernetes.io/projected/e01e843d-f221-43ed-a309-e21fe298f64f-kube-api-access-8nl6c\") pod \"migrator-59844c95c7-d8wjb\" (UID: \"e01e843d-f221-43ed-a309-e21fe298f64f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004400 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-webhook-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004423 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0335a481-e6c1-459c-8325-5da8dfcbcdb1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004487 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk5bd\" (UniqueName: \"kubernetes.io/projected/0335a481-e6c1-459c-8325-5da8dfcbcdb1-kube-api-access-fk5bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004514 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") pod 
\"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004535 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-images\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004555 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f88820f-4a65-4799-86f7-19be89871165-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.004571 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkj6s\" (UniqueName: \"kubernetes.io/projected/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-kube-api-access-pkj6s\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.007744 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtzpg"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.013069 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.013068 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-cabundle\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.013672 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3f91eb97-e4cc-4a67-9426-7aec499b4485-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.013917 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.513883402 +0000 UTC m=+137.924993361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.017725 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.018055 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a9e87e73-cad4-48f0-81f9-d636cd123278-trusted-ca\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.018233 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.019181 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73369200-053d-4d9d-a775-c3cb76119697-images\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.019411 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f88820f-4a65-4799-86f7-19be89871165-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.019725 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ddda125-6c9a-4546-901a-a32dd6e99251-config\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.019732 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10a252bf-8be9-40ee-9632-4abbb989e43d-tmpfs\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.021354 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9a409b5-e519-4c64-bc56-0b74757f2181-config\") pod 
\"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.022643 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0335a481-e6c1-459c-8325-5da8dfcbcdb1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.022859 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.023184 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.023376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.025166 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.025832 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-config\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.026009 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73369200-053d-4d9d-a775-c3cb76119697-proxy-tls\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.026689 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.030458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.030884 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-profile-collector-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.030897 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.035024 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0335a481-e6c1-459c-8325-5da8dfcbcdb1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.038635 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.039912 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-webhook-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.040318 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.040705 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f88820f-4a65-4799-86f7-19be89871165-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.040945 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ddda125-6c9a-4546-901a-a32dd6e99251-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.041086 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10a252bf-8be9-40ee-9632-4abbb989e43d-apiservice-cert\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.042643 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/153c6af8-5ac1-4256-ad20-992ad604c61b-signing-key\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.042716 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a9e87e73-cad4-48f0-81f9-d636cd123278-metrics-tls\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.044286 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.045538 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.053679 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-srv-cert\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.053825 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9a409b5-e519-4c64-bc56-0b74757f2181-serving-cert\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.054592 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") pod \"marketplace-operator-79b997595-5jwbt\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.057263 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.062613 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3f91eb97-e4cc-4a67-9426-7aec499b4485-proxy-tls\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.068606 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.071339 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3640120-a52b-4ee5-aacb-83df135f0470-cert\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.080632 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jjt2k"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.080689 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.087365 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.090117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ddda125-6c9a-4546-901a-a32dd6e99251-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bxgr9\" (UID: \"9ddda125-6c9a-4546-901a-a32dd6e99251\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105371 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5d78\" (UniqueName: \"kubernetes.io/projected/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-kube-api-access-b5d78\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105434 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-node-bootstrap-token\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105518 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkj6s\" (UniqueName: \"kubernetes.io/projected/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-kube-api-access-pkj6s\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105551 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-plugins-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105573 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-mountpoint-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105591 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-registration-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105608 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkz49\" (UniqueName: \"kubernetes.io/projected/eed71162-446a-4681-a3a8-23247149532c-kube-api-access-xkz49\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105631 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-socket-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105650 
4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105678 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-csi-data-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105693 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-certs\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-config-volume\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.105767 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-metrics-tls\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.106444 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-registration-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.108693 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-mountpoint-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.108701 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-socket-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.108816 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-csi-data-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.109163 4769 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.609145594 +0000 UTC m=+138.020255523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.109177 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-plugins-dir\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.109219 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-config-volume\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.110110 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wkzs\" (UniqueName: \"kubernetes.io/projected/3f91eb97-e4cc-4a67-9426-7aec499b4485-kube-api-access-9wkzs\") pod \"machine-config-controller-84d6567774-rcksw\" (UID: \"3f91eb97-e4cc-4a67-9426-7aec499b4485\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.116239 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-metrics-tls\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.120763 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-certs\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.128318 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-bound-sa-token\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.148782 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btx8b\" (UniqueName: \"kubernetes.io/projected/a9e87e73-cad4-48f0-81f9-d636cd123278-kube-api-access-btx8b\") pod \"ingress-operator-5b745b69d9-9z2dj\" (UID: \"a9e87e73-cad4-48f0-81f9-d636cd123278\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.158419 4769 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-65brj"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.168384 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f89vh\" (UniqueName: \"kubernetes.io/projected/e3640120-a52b-4ee5-aacb-83df135f0470-kube-api-access-f89vh\") pod \"ingress-canary-5qtks\" (UID: \"e3640120-a52b-4ee5-aacb-83df135f0470\") " pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.185562 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.207563 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.207969 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.707952163 +0000 UTC m=+138.119062092 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.210513 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fbc7f2a-fce4-4747-9a96-1fc4631a6197-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-5lfqv\" (UID: \"1fbc7f2a-fce4-4747-9a96-1fc4631a6197\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.226287 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxm44\" (UniqueName: \"kubernetes.io/projected/2f88820f-4a65-4799-86f7-19be89871165-kube-api-access-cxm44\") pod \"openshift-controller-manager-operator-756b6f6bc6-2s8ds\" (UID: \"2f88820f-4a65-4799-86f7-19be89871165\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.238195 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.243613 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpcbg\" (UniqueName: \"kubernetes.io/projected/7d18d670-f698-4b8c-b6c3-300dc1ed8e46-kube-api-access-tpcbg\") pod \"olm-operator-6b444d44fb-6sgg2\" (UID: \"7d18d670-f698-4b8c-b6c3-300dc1ed8e46\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.245404 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.266274 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vr8\" (UniqueName: \"kubernetes.io/projected/10a252bf-8be9-40ee-9632-4abbb989e43d-kube-api-access-77vr8\") pod \"packageserver-d55dfcdfc-98pt8\" (UID: \"10a252bf-8be9-40ee-9632-4abbb989e43d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.284327 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eed71162-446a-4681-a3a8-23247149532c-node-bootstrap-token\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.284957 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8xv\" (UniqueName: \"kubernetes.io/projected/73369200-053d-4d9d-a775-c3cb76119697-kube-api-access-tb8xv\") pod \"machine-config-operator-74547568cd-m5n64\" (UID: \"73369200-053d-4d9d-a775-c3cb76119697\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.289446 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c5cf556_ec03_4f29_94ed_13a58f54275c.slice/crio-9cf0f0e3fec9189b23a4b21c7da103edfe9deb79a563da0e3166056a7089771c WatchSource:0}: Error finding container 9cf0f0e3fec9189b23a4b21c7da103edfe9deb79a563da0e3166056a7089771c: Status 404 returned error can't find the container with id 9cf0f0e3fec9189b23a4b21c7da103edfe9deb79a563da0e3166056a7089771c Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.295614 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2vm4g"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.299446 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dltl2"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.303117 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.312554 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.312569 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvd2r\" (UniqueName: \"kubernetes.io/projected/e9a409b5-e519-4c64-bc56-0b74757f2181-kube-api-access-dvd2r\") pod \"service-ca-operator-777779d784-tv6dp\" (UID: \"e9a409b5-e519-4c64-bc56-0b74757f2181\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.312837 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.313967 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.813948609 +0000 UTC m=+138.225058558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.331026 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52f284ae_bace_4bd8_8140_7f37fbad55d4.slice/crio-15f3c4ed22a595247be6516f8fc888ba081818c56bc0263b5edae7183a2a8c51 WatchSource:0}: Error finding container 15f3c4ed22a595247be6516f8fc888ba081818c56bc0263b5edae7183a2a8c51: Status 404 returned error can't find the container with id 15f3c4ed22a595247be6516f8fc888ba081818c56bc0263b5edae7183a2a8c51 Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.332109 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jswq\" (UniqueName: \"kubernetes.io/projected/153c6af8-5ac1-4256-ad20-992ad604c61b-kube-api-access-2jswq\") pod \"service-ca-9c57cc56f-gcpwt\" (UID: \"153c6af8-5ac1-4256-ad20-992ad604c61b\") " pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.338509 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5758b1f6_5135_428d_ad0b_6892a49d1800.slice/crio-20dcfd8c2fcd40a2700c438c94e18346739575631d07a620500af1bc89af4e2b WatchSource:0}: Error finding container 20dcfd8c2fcd40a2700c438c94e18346739575631d07a620500af1bc89af4e2b: Status 404 returned error can't find the container with id 20dcfd8c2fcd40a2700c438c94e18346739575631d07a620500af1bc89af4e2b Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 
13:45:58.342828 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk5bd\" (UniqueName: \"kubernetes.io/projected/0335a481-e6c1-459c-8325-5da8dfcbcdb1-kube-api-access-fk5bd\") pod \"kube-storage-version-migrator-operator-b67b599dd-9nmqg\" (UID: \"0335a481-e6c1-459c-8325-5da8dfcbcdb1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.363488 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") pod \"collect-profiles-29484825-hgsdh\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.364769 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.383966 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nl6c\" (UniqueName: \"kubernetes.io/projected/e01e843d-f221-43ed-a309-e21fe298f64f-kube-api-access-8nl6c\") pod \"migrator-59844c95c7-d8wjb\" (UID: \"e01e843d-f221-43ed-a309-e21fe298f64f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.385744 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.397644 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.414271 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.91424785 +0000 UTC m=+138.325357779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.414300 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.414854 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.415237 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:58.915222617 +0000 UTC m=+138.326332546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.416003 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.423093 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.429409 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5d78\" (UniqueName: \"kubernetes.io/projected/6e9c7f00-95b3-4453-8d82-df8b88a2bc8a-kube-api-access-b5d78\") pod \"csi-hostpathplugin-xdxvs\" (UID: \"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a\") " pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.429624 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.431385 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.441432 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5qtks" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.444088 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.445366 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.459499 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkz49\" (UniqueName: \"kubernetes.io/projected/eed71162-446a-4681-a3a8-23247149532c-kube-api-access-xkz49\") pod \"machine-config-server-ggj4q\" (UID: \"eed71162-446a-4681-a3a8-23247149532c\") " pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.485916 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkj6s\" (UniqueName: \"kubernetes.io/projected/bf805bae-0da1-4a8b-a8c8-6c99cf8ce515-kube-api-access-pkj6s\") pod \"dns-default-rkk84\" (UID: \"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515\") " pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.487273 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.492893 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rkk84" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.494117 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mm5p"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.504314 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.516453 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.516930 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.016895624 +0000 UTC m=+138.428005553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.594941 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.597335 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81769776_c586_45a0_a9ed_42ce4789bb28.slice/crio-cec41d2114562c6e7eff84ec57f631899a095a3e5796fdb0ee62aacfdeaf374c WatchSource:0}: Error finding container cec41d2114562c6e7eff84ec57f631899a095a3e5796fdb0ee62aacfdeaf374c: Status 404 returned error can't find the container with id cec41d2114562c6e7eff84ec57f631899a095a3e5796fdb0ee62aacfdeaf374c Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.599768 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ds5qk"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.617901 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.618241 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.118224283 +0000 UTC m=+138.529334212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.624755 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" event={"ID":"5758b1f6-5135-428d-ad0b-6892a49d1800","Type":"ContainerStarted","Data":"20dcfd8c2fcd40a2700c438c94e18346739575631d07a620500af1bc89af4e2b"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.625442 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" event={"ID":"db199c04-6231-46b3-a4e7-5cd74604b005","Type":"ContainerStarted","Data":"3dac4c0e616238b9276a10434a08b75a4f898abecefb5d865798f5f9f871c1f7"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.626208 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" event={"ID":"52f284ae-bace-4bd8-8140-7f37fbad55d4","Type":"ContainerStarted","Data":"15f3c4ed22a595247be6516f8fc888ba081818c56bc0263b5edae7183a2a8c51"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.629710 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.630852 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mgft7" 
event={"ID":"92eb7fb7-d1b8-45ad-b8ff-8411d04eb048","Type":"ContainerStarted","Data":"97a1a62427a3ec2a73662c2575862cfebc5a1a3859d4927655a3699ac711d789"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.631011 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.633189 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" event={"ID":"8c1e55ad-d8f0-4ceb-b929-e4f09903df58","Type":"ContainerStarted","Data":"3c6a93df69f7d8a756e66110d110588adfe6fa5f2e4b4ad92ae0ff8ad10e7d7e"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.634605 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-mgft7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.634639 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mgft7" podUID="92eb7fb7-d1b8-45ad-b8ff-8411d04eb048" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.639807 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" event={"ID":"2b0fa7ff-24c4-431c-bc35-87f9483d5c70","Type":"ContainerStarted","Data":"99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.644072 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.644473 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pb7qw" event={"ID":"5c5cf556-ec03-4f29-94ed-13a58f54275c","Type":"ContainerStarted","Data":"9cf0f0e3fec9189b23a4b21c7da103edfe9deb79a563da0e3166056a7089771c"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.649072 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" event={"ID":"ce7607b6-0e74-47ba-8875-057821862224","Type":"ContainerStarted","Data":"5da20624d5c68fcb1a8c77977639b7cf7fea8fff2cff28af01f29f1b37b182e7"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.651268 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" event={"ID":"c1a96247-d002-4f96-9695-16a4011f3ad5","Type":"ContainerStarted","Data":"0c11eb654fab27ccff28103bc5868b950df871a3ff861f98804806ad409a7f1b"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.651294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" event={"ID":"c1a96247-d002-4f96-9695-16a4011f3ad5","Type":"ContainerStarted","Data":"72b0fccf83855e247e0a6c9983b7b2a8640e6a27df7042e164f8fe56bfcb6df9"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.653564 4769 generic.go:334] "Generic (PLEG): container finished" podID="40076fe2-006c-4dc7-ac7c-71fa27c9bb7d" containerID="459a9f471127a040b63915fd86a2c1727c19775edc4779622bf444df59d12b72" exitCode=0 Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.653606 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" event={"ID":"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d","Type":"ContainerDied","Data":"459a9f471127a040b63915fd86a2c1727c19775edc4779622bf444df59d12b72"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.654695 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" event={"ID":"81769776-c586-45a0-a9ed-42ce4789bb28","Type":"ContainerStarted","Data":"cec41d2114562c6e7eff84ec57f631899a095a3e5796fdb0ee62aacfdeaf374c"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.655461 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" event={"ID":"f4e58a9e-ecc8-43de-9518-0b014b2a27d2","Type":"ContainerStarted","Data":"9557a5ff3f6a65fcc1117417184e0b6084b41c770702d1372de880df0dade92d"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.656228 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" event={"ID":"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8","Type":"ContainerStarted","Data":"205c5845b5f3bc2b5a7a4133454743ada342c3b43673454d4739b7eb2ee66954"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.656917 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nwrtw" event={"ID":"9fa4c168-21ea-4f79-a600-7f3c8f656bd0","Type":"ContainerStarted","Data":"261bd1091a2577bc464771e7c33703e0f325865e92a22082bfb502ff9ac9d6f2"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.657505 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" event={"ID":"15723c66-27d3-4cea-9962-e75bbe7bb967","Type":"ContainerStarted","Data":"b2e183a2748638f6147b4875fa0815521584060feb7b408b9c34ad657edc5a60"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.659867 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" event={"ID":"81a5be64-af9a-4376-9105-c36371ad5069","Type":"ContainerStarted","Data":"f89c3a362197841c752ab5f3edfa4041e9b516f25543b045b655daf2d5510368"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.660781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" event={"ID":"ba0299e2-1902-461d-bf42-f3d5dfe205ff","Type":"ContainerStarted","Data":"78f838c57c348d24f96e78f988e702c61f7ee98211b60bd96d672316dfde3ae1"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.661555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" event={"ID":"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43","Type":"ContainerStarted","Data":"eff4648ecb9b16184b1c776dfeea1941a88a960abd565b6c43c161ec06e71187"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.662394 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" event={"ID":"88755d81-da75-40b3-97c4-224eaad0eca2","Type":"ContainerStarted","Data":"8a4ca8e6f7f24168e7b28e169244f2171fb54980af290f9158d1ed973b3b78f4"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.663705 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" event={"ID":"e14c6636-281b-40e1-9ee8-1a08812104fd","Type":"ContainerStarted","Data":"ecd96351628bb1d50b55482cf0c3518a0cdf7cafe69577c7b0d90695bd293ec5"} Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.718903 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.719041 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.219024076 +0000 UTC m=+138.630134005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.719656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.720834 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.220811285 +0000 UTC m=+138.631921214 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.751854 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ggj4q" Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.776235 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d18d670_f698_4b8c_b6c3_300dc1ed8e46.slice/crio-26d893b43e058c6203160cba9c74767ed4aa3dddc64d8e87a697a55d51779bb7 WatchSource:0}: Error finding container 26d893b43e058c6203160cba9c74767ed4aa3dddc64d8e87a697a55d51779bb7: Status 404 returned error can't find the container with id 26d893b43e058c6203160cba9c74767ed4aa3dddc64d8e87a697a55d51779bb7 Jan 22 13:45:58 crc kubenswrapper[4769]: W0122 13:45:58.776739 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8b75cc3_465e_4542_82ee_4950744e89a0.slice/crio-53d991a579fab4e79b7d24b5b9174ffcd82d7306f3e8601d4468013ccaecb4fe WatchSource:0}: Error finding container 53d991a579fab4e79b7d24b5b9174ffcd82d7306f3e8601d4468013ccaecb4fe: Status 404 returned error can't find the container with id 53d991a579fab4e79b7d24b5b9174ffcd82d7306f3e8601d4468013ccaecb4fe Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.792308 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.820650 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.820999 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.320980641 +0000 UTC m=+138.732090570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.921973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:58 crc kubenswrapper[4769]: E0122 13:45:58.922759 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.422743401 +0000 UTC m=+138.833853330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.924933 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.924968 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj"] Jan 22 13:45:58 crc kubenswrapper[4769]: I0122 13:45:58.924980 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.026355 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.026844 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.526827045 +0000 UTC m=+138.937936974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.031278 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64"] Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.133996 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.134886 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.634873599 +0000 UTC m=+139.045983528 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.235431 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.235827 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.735783976 +0000 UTC m=+139.146893905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.328160 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9"] Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.345987 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.346314 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.846300057 +0000 UTC m=+139.257409986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.422826 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-gcpwt"] Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.447892 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.448094 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.948069407 +0000 UTC m=+139.359179336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.448186 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.448501 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:45:59.948484999 +0000 UTC m=+139.359594928 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.549130 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.550467 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.050449064 +0000 UTC m=+139.461558993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.561647 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.562457 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.062440154 +0000 UTC m=+139.473550083 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.665934 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.666268 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.166254461 +0000 UTC m=+139.577364380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.749173 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" event={"ID":"8c1e55ad-d8f0-4ceb-b929-e4f09903df58","Type":"ContainerStarted","Data":"509f5511eb5e1404c2cd76e0c51c68ffc6dabc6c95a6aa3ff66e728a8b25495c"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.755245 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" event={"ID":"73369200-053d-4d9d-a775-c3cb76119697","Type":"ContainerStarted","Data":"bec64279395a6d602d01ad63df5c1e5e8eced06e90e4c03ac8f551be10f43226"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.766905 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.767346 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.267333162 +0000 UTC m=+139.678443091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.771638 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" event={"ID":"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8","Type":"ContainerStarted","Data":"453cb9ed2a92ecaf90bedbb493b80ff3312c834885f3d4755fc067c4850f3079"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.771752 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dbzkw" podStartSLOduration=120.771738323 podStartE2EDuration="2m0.771738323s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:59.766276014 +0000 UTC m=+139.177385963" watchObservedRunningTime="2026-01-22 13:45:59.771738323 +0000 UTC m=+139.182848252" Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.774342 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ggj4q" event={"ID":"eed71162-446a-4681-a3a8-23247149532c","Type":"ContainerStarted","Data":"d938ce0bb72b2efdb480ab7e0796f80b8ac474cf537d2f8f3ef5b60cbdb8cb24"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.777016 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" event={"ID":"43448f45-644f-4b5a-aa06-567b5c8f8279","Type":"ContainerStarted","Data":"5eeedc28e52cdba16a36873cad58d79a10aa01c7ed135179a7685905d0788436"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.815194 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" event={"ID":"9ddda125-6c9a-4546-901a-a32dd6e99251","Type":"ContainerStarted","Data":"5ab0d3cd3aa8ae56bdc7febada89aec58f1ae7ffcadd3dac76a290873e9339bc"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.868364 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" event={"ID":"5758b1f6-5135-428d-ad0b-6892a49d1800","Type":"ContainerStarted","Data":"61eff18189b6c9a1bd08ccc0a7ab9b189d05340bdea3984317c2adc4a1aa747e"} Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.869739 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.870389 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.872166 4769 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.372127366 +0000 UTC m=+139.783237295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.878076 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:45:59 crc kubenswrapper[4769]: E0122 13:45:59.887057 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.387034546 +0000 UTC m=+139.798144475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.887885 4769 patch_prober.go:28] interesting pod/console-operator-58897d9998-2vm4g container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.887976 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" podUID="5758b1f6-5135-428d-ad0b-6892a49d1800" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 22 13:45:59 crc kubenswrapper[4769]: I0122 13:45:59.996427 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:45:59.997821 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.497782744 +0000 UTC m=+139.908892673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:45:59.999673 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mgft7" podStartSLOduration=119.999653905 podStartE2EDuration="1m59.999653905s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:59.945750352 +0000 UTC m=+139.356860291" watchObservedRunningTime="2026-01-22 13:45:59.999653905 +0000 UTC m=+139.410763834" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.009466 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" podStartSLOduration=120.009444525 podStartE2EDuration="2m0.009444525s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:45:59.99910647 +0000 UTC m=+139.410216409" watchObservedRunningTime="2026-01-22 13:46:00.009444525 +0000 UTC m=+139.420554454" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.017543 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5qtks"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.027327 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerStarted","Data":"c437a788f729ec1c74235c0c86ed4e15424a790ae709346c3620566dfd2a5bb2"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.030455 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-xdxvs"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.044257 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bkbvd" podStartSLOduration=120.044239812 podStartE2EDuration="2m0.044239812s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.042611747 +0000 UTC m=+139.453721686" watchObservedRunningTime="2026-01-22 13:46:00.044239812 +0000 UTC m=+139.455349741" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.063593 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" event={"ID":"e14c6636-281b-40e1-9ee8-1a08812104fd","Type":"ContainerStarted","Data":"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.064637 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.097433 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.097827 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.597815186 +0000 UTC m=+140.008925115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: W0122 13:46:00.146974 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3640120_a52b_4ee5_aacb_83df135f0470.slice/crio-aa78fcb70486bc019c691076a7adb1dcd9245d05aeda3480b5b7ef4fdce04449 WatchSource:0}: Error finding container aa78fcb70486bc019c691076a7adb1dcd9245d05aeda3480b5b7ef4fdce04449: Status 404 returned error can't find the container with id aa78fcb70486bc019c691076a7adb1dcd9245d05aeda3480b5b7ef4fdce04449 Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.147461 4769 generic.go:334] "Generic (PLEG): container finished" podID="81a5be64-af9a-4376-9105-c36371ad5069" containerID="4afe25e720ceb6da4ecc630fdedcd4ab4b8cac879f3f07359c5cf335ae32aa65" exitCode=0 Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.147566 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" event={"ID":"81a5be64-af9a-4376-9105-c36371ad5069","Type":"ContainerDied","Data":"4afe25e720ceb6da4ecc630fdedcd4ab4b8cac879f3f07359c5cf335ae32aa65"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.198832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.200044 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.700028169 +0000 UTC m=+140.111138098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.222249 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" podStartSLOduration=121.22222894 podStartE2EDuration="2m1.22222894s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.147665668 +0000 UTC m=+139.558775597" watchObservedRunningTime="2026-01-22 13:46:00.22222894 +0000 UTC m=+139.633338869" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.239777 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" event={"ID":"f4e58a9e-ecc8-43de-9518-0b014b2a27d2","Type":"ContainerStarted","Data":"88d2dabc1f7f8d4e6bab567d6454ab8cf35439d88628883475f54f7bea23bfa6"} Jan 22 13:46:00 crc kubenswrapper[4769]: W0122 13:46:00.243902 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9c7f00_95b3_4453_8d82_df8b88a2bc8a.slice/crio-fe8991de8a579d3543f2e45b57db67c0f93dba16a0a123b244efbb6f989087ea WatchSource:0}: Error finding container fe8991de8a579d3543f2e45b57db67c0f93dba16a0a123b244efbb6f989087ea: Status 404 returned error can't find the container with id fe8991de8a579d3543f2e45b57db67c0f93dba16a0a123b244efbb6f989087ea Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.250070 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" event={"ID":"7d18d670-f698-4b8c-b6c3-300dc1ed8e46","Type":"ContainerStarted","Data":"26d893b43e058c6203160cba9c74767ed4aa3dddc64d8e87a697a55d51779bb7"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.316073 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" event={"ID":"2b0fa7ff-24c4-431c-bc35-87f9483d5c70","Type":"ContainerStarted","Data":"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.317440 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.321949 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.322274 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-22 13:46:00.822261853 +0000 UTC m=+140.233371782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.339315 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.348859 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.395398 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pb7qw" event={"ID":"5c5cf556-ec03-4f29-94ed-13a58f54275c","Type":"ContainerStarted","Data":"83747314671fa6f7c1a40e183a9a83e1df752bb3f15a71c3441472c55ff2deb5"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.423982 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nwrtw" event={"ID":"9fa4c168-21ea-4f79-a600-7f3c8f656bd0","Type":"ContainerStarted","Data":"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.424814 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" podStartSLOduration=120.424783964 podStartE2EDuration="2m0.424783964s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.355386224 +0000 UTC m=+139.766496153" watchObservedRunningTime="2026-01-22 13:46:00.424783964 +0000 UTC m=+139.835893893" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.425010 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.426598 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:00.926574423 +0000 UTC m=+140.337684352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.452404 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.460848 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.473623 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.481985 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" event={"ID":"2f88820f-4a65-4799-86f7-19be89871165","Type":"ContainerStarted","Data":"28a643e809f090fe88ab01fc428a29d61e7801c79ddd459639f8aa0d1379afd2"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.492483 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-nwrtw" podStartSLOduration=120.492466016 podStartE2EDuration="2m0.492466016s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.481739491 +0000 UTC m=+139.892849420" watchObservedRunningTime="2026-01-22 13:46:00.492466016 +0000 UTC m=+139.903575945" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.514785 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pb7qw" podStartSLOduration=120.514765799 podStartE2EDuration="2m0.514765799s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.513447363 +0000 UTC m=+139.924557292" watchObservedRunningTime="2026-01-22 13:46:00.514765799 +0000 UTC m=+139.925875728" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.532857 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.535060 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.035042278 +0000 UTC m=+140.446152217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.561623 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rkk84"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.613559 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" event={"ID":"88755d81-da75-40b3-97c4-224eaad0eca2","Type":"ContainerStarted","Data":"2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.614561 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.625623 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:46:00 crc kubenswrapper[4769]: W0122 13:46:00.625711 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fbc7f2a_fce4_4747_9a96_1fc4631a6197.slice/crio-feb7629d8f114fe6483974b25cf3b1820b5dd34a46c66b845e4d676f51cc766b WatchSource:0}: Error finding container feb7629d8f114fe6483974b25cf3b1820b5dd34a46c66b845e4d676f51cc766b: Status 404 returned error can't find the container with id feb7629d8f114fe6483974b25cf3b1820b5dd34a46c66b845e4d676f51cc766b Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.641477 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.646044 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.646595 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.146578606 +0000 UTC m=+140.557688535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.661391 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" event={"ID":"db7a69ec-2a82-4f9b-b83a-42237a02087e","Type":"ContainerStarted","Data":"41bdbc90f71424027e07662dd5bcb107d909154091d0ce9d7b455121ca3b97d2"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.688034 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" event={"ID":"d8b75cc3-465e-4542-82ee-4950744e89a0","Type":"ContainerStarted","Data":"53d991a579fab4e79b7d24b5b9174ffcd82d7306f3e8601d4468013ccaecb4fe"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.696901 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" podStartSLOduration=120.69687863 podStartE2EDuration="2m0.69687863s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.684428118 +0000 UTC m=+140.095538057" watchObservedRunningTime="2026-01-22 13:46:00.69687863 +0000 UTC m=+140.107988559" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.735482 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" event={"ID":"ce7607b6-0e74-47ba-8875-057821862224","Type":"ContainerStarted","Data":"eea67f3441e94075454fb0c7d3a96d5408a510a886b13c7d5270615e57e2b2ea"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.751536 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.753191 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.253175539 +0000 UTC m=+140.664285468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.765052 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" podStartSLOduration=120.765033976 podStartE2EDuration="2m0.765033976s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.764686757 +0000 UTC m=+140.175796686" watchObservedRunningTime="2026-01-22 13:46:00.765033976 +0000 UTC m=+140.176143905" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.775321 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.784832 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" event={"ID":"a9e87e73-cad4-48f0-81f9-d636cd123278","Type":"ContainerStarted","Data":"c411f7d050d1c510d6717211a8877f15f6ed19c31db25f69102617eb577b294f"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.824578 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp"] Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.829492 4769 generic.go:334] "Generic (PLEG): container finished" podID="15723c66-27d3-4cea-9962-e75bbe7bb967" containerID="15f6c90aff91cd7860e436fa3cbf2c39646fed4974a607821e1a18f1fb00afb3" exitCode=0 Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.829614 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" event={"ID":"15723c66-27d3-4cea-9962-e75bbe7bb967","Type":"ContainerDied","Data":"15f6c90aff91cd7860e436fa3cbf2c39646fed4974a607821e1a18f1fb00afb3"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.844979 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.852771 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.854102 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.354076697 +0000 UTC m=+140.765186636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.913329 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-mgft7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.913702 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mgft7" podUID="92eb7fb7-d1b8-45ad-b8ff-8411d04eb048" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.914911 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-xmh8s" podStartSLOduration=120.91489613 podStartE2EDuration="2m0.91489613s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.878413656 +0000 UTC m=+140.289523585" watchObservedRunningTime="2026-01-22 13:46:00.91489613 +0000 UTC m=+140.326006059" Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.922947 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" event={"ID":"52f284ae-bace-4bd8-8140-7f37fbad55d4","Type":"ContainerStarted","Data":"e8ead6bae50748969fc2453d09ac55f1d0078c3154caa7084217fea93504125c"} Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.956738 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:00 crc kubenswrapper[4769]: E0122 13:46:00.958526 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.45851434 +0000 UTC m=+140.869624269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:00 crc kubenswrapper[4769]: I0122 13:46:00.987230 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dltl2" podStartSLOduration=121.9872135 podStartE2EDuration="2m1.9872135s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:00.985121042 +0000 UTC m=+140.396230961" watchObservedRunningTime="2026-01-22 13:46:00.9872135 +0000 UTC m=+140.398323419" Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.001634 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:01 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:01 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:01 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.001682 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.006928 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:46:01 crc kubenswrapper[4769]: W0122 13:46:01.034306 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9a409b5_e519_4c64_bc56_0b74757f2181.slice/crio-71d39971d170735b8c8c23ba28563a7279d991261ca0796ba01c898d1d545fa8 WatchSource:0}: Error finding container 71d39971d170735b8c8c23ba28563a7279d991261ca0796ba01c898d1d545fa8: Status 404 returned error can't find the container with id 71d39971d170735b8c8c23ba28563a7279d991261ca0796ba01c898d1d545fa8 Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.058070 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.059845 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.559772116 +0000 UTC m=+140.970882045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.167615 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.168096 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.668076867 +0000 UTC m=+141.079186796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.271141 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.272449 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.772433008 +0000 UTC m=+141.183542927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.375864 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.376718 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.876704598 +0000 UTC m=+141.287814527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.480324 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.480691 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:01.980677069 +0000 UTC m=+141.391786998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.582743 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.583539 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.083522338 +0000 UTC m=+141.494632267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.683746 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.684858 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.184825246 +0000 UTC m=+141.595935175 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.787067 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.787534 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.287522342 +0000 UTC m=+141.698632271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.887953 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.888209 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.388195482 +0000 UTC m=+141.799305411 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.984495 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5qtks" event={"ID":"e3640120-a52b-4ee5-aacb-83df135f0470","Type":"ContainerStarted","Data":"6a83b639b8e7281d2f38a8a13bd2d8cd0b3009fbfbe3619a1b641f3078427312"} Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.984547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5qtks" event={"ID":"e3640120-a52b-4ee5-aacb-83df135f0470","Type":"ContainerStarted","Data":"aa78fcb70486bc019c691076a7adb1dcd9245d05aeda3480b5b7ef4fdce04449"} Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.988663 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:01 crc kubenswrapper[4769]: E0122 13:46:01.989030 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.489016637 +0000 UTC m=+141.900126566 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.999282 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:01 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:01 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:01 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:01 crc kubenswrapper[4769]: I0122 13:46:01.999330 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.016830 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" event={"ID":"3f91eb97-e4cc-4a67-9426-7aec499b4485","Type":"ContainerStarted","Data":"a01d6813d3fe5033788bcf79e424d8c24edd16d648cb0d91df6cac2ad7e87721"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.033842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" event={"ID":"f4e58a9e-ecc8-43de-9518-0b014b2a27d2","Type":"ContainerStarted","Data":"edf774cad918d0c903e63356c9349f3c9982e1a39088a1428250a648b8d006ca"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.035632 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5qtks" podStartSLOduration=7.035613258 podStartE2EDuration="7.035613258s" podCreationTimestamp="2026-01-22 13:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.034132428 +0000 UTC m=+141.445242357" watchObservedRunningTime="2026-01-22 13:46:02.035613258 +0000 UTC m=+141.446723187" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.067688 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" event={"ID":"7d18d670-f698-4b8c-b6c3-300dc1ed8e46","Type":"ContainerStarted","Data":"50387bd8f7a7a56be6825d4bf66c471d92b09523500cbb6eeae67922844fcff8"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.069840 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.071080 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-65brj" podStartSLOduration=122.071064205 podStartE2EDuration="2m2.071064205s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 13:46:02.069318986 +0000 UTC m=+141.480428925" watchObservedRunningTime="2026-01-22 13:46:02.071064205 +0000 UTC m=+141.482174134" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.095735 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.096250 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.096659 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.596645888 +0000 UTC m=+142.007755817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.097854 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" event={"ID":"0335a481-e6c1-459c-8325-5da8dfcbcdb1","Type":"ContainerStarted","Data":"1312b26fad537147167a9183728704277b377f20b6e7d69dec973c8bdfb320c3"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.125577 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" event={"ID":"a6d7f1cf-d68c-4658-98b2-e18d8e70edb8","Type":"ContainerStarted","Data":"4a1bb7cc56593bc750b8f9678ba5779bcfaf3e70fccaf589fdf7d1db5a9ec23a"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.151515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" event={"ID":"d8b75cc3-465e-4542-82ee-4950744e89a0","Type":"ContainerStarted","Data":"0d1cd2b147b83a98349400c6c23230d428ea76258737f8db5d5de0ced500e378"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.154302 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-6sgg2" podStartSLOduration=122.154290074 podStartE2EDuration="2m2.154290074s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.122547541 +0000 UTC m=+141.533657480" watchObservedRunningTime="2026-01-22 13:46:02.154290074 +0000 UTC m=+141.565400003" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.169392 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkk84" 
event={"ID":"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515","Type":"ContainerStarted","Data":"dfe32d4afb4e757cc4ea729d697a684a0cdcb0a6f0a9f678263b28d7d9d302e7"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.188490 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-s9v5x" podStartSLOduration=122.188474936 podStartE2EDuration="2m2.188474936s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.187843518 +0000 UTC m=+141.598953457" watchObservedRunningTime="2026-01-22 13:46:02.188474936 +0000 UTC m=+141.599584865" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.197653 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.199244 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.699228701 +0000 UTC m=+142.110338630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.229865 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" event={"ID":"73369200-053d-4d9d-a775-c3cb76119697","Type":"ContainerStarted","Data":"fd9612b476f5c956cf00dfe340da5ee61e6237647e1343b0c7cb59eac9b9cf95"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.255090 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" event={"ID":"10a252bf-8be9-40ee-9632-4abbb989e43d","Type":"ContainerStarted","Data":"bd846bbe321a9c4a59f95e0d2f83926c0b9add9d4e63dd1548a328273a6a4325"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.256182 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.258061 4769 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-98pt8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.258111 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" podUID="10a252bf-8be9-40ee-9632-4abbb989e43d" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.275578 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" event={"ID":"9ddda125-6c9a-4546-901a-a32dd6e99251","Type":"ContainerStarted","Data":"3fe7ed6bf9a7623cc09d67823716b944198862d8419f9034e99517884a922e59"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.291372 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"fe8991de8a579d3543f2e45b57db67c0f93dba16a0a123b244efbb6f989087ea"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.293009 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" podStartSLOduration=122.293000591 podStartE2EDuration="2m2.293000591s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.290941325 +0000 UTC m=+141.702051254" watchObservedRunningTime="2026-01-22 13:46:02.293000591 +0000 UTC m=+141.704110520" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.299593 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" event={"ID":"3ef7a187-ce98-488c-a9b0-e16449e2882f","Type":"ContainerStarted","Data":"e652943776f78a5fd95ced60a7e853ebc62ea8a256a4dea93d8512bf63d1796f"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.299645 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" event={"ID":"3ef7a187-ce98-488c-a9b0-e16449e2882f","Type":"ContainerStarted","Data":"b5f0b3f3f7b7a0b35bdff04091a4f43dc2a4d7a638db51c8e64ac5ca77fff8bf"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.300519 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.301548 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.801523016 +0000 UTC m=+142.212632945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.343707 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" event={"ID":"2f88820f-4a65-4799-86f7-19be89871165","Type":"ContainerStarted","Data":"b6358b93440dba2ed8bbc2419f31a54a25b390f612f05e88cc62cf454c483e9b"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.343932 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bxgr9" podStartSLOduration=122.343922153 podStartE2EDuration="2m2.343922153s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.341055154 +0000 UTC m=+141.752165083" watchObservedRunningTime="2026-01-22 13:46:02.343922153 +0000 UTC m=+141.755032072" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.373140 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" event={"ID":"40076fe2-006c-4dc7-ac7c-71fa27c9bb7d","Type":"ContainerStarted","Data":"c60747a367f969aba8431d1264c3b06d853ad4743d25e5c6f5da73610a6a897d"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.374926 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.379875 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.381449 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.397609 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.401507 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" podStartSLOduration=62.401487847 podStartE2EDuration="1m2.401487847s" podCreationTimestamp="2026-01-22 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.384669854 +0000 UTC m=+141.795779793" watchObservedRunningTime="2026-01-22 13:46:02.401487847 +0000 UTC m=+141.812597776" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.423937 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.424225 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:02.924213632 +0000 UTC m=+142.335323561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.431690 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.432628 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" event={"ID":"8c1e55ad-d8f0-4ceb-b929-e4f09903df58","Type":"ContainerStarted","Data":"3590631e56908a9ec4b152769bca64d1042fd53ef303711e9fab3815b0bc646b"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.438395 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" event={"ID":"ba0299e2-1902-461d-bf42-f3d5dfe205ff","Type":"ContainerStarted","Data":"4b585d21514770d3f9c2306b537095fb371b963944e17fb1c137e5b0bd19f513"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.439283 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" event={"ID":"e01e843d-f221-43ed-a309-e21fe298f64f","Type":"ContainerStarted","Data":"3d38ffa8eb97c9b33acb30873648d0ba5c2c602e82463423c16aac06152bf76f"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.439719 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2s8ds" podStartSLOduration=122.439698998 podStartE2EDuration="2m2.439698998s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.428047828 +0000 UTC m=+141.839157757" watchObservedRunningTime="2026-01-22 13:46:02.439698998 +0000 UTC m=+141.850808927" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.447998 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerStarted","Data":"63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.450154 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.451872 4769 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5jwbt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" start-of-body= Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.451923 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": dial tcp 10.217.0.34:8080: connect: connection refused" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.469861 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ggj4q" event={"ID":"eed71162-446a-4681-a3a8-23247149532c","Type":"ContainerStarted","Data":"6f6986368046f4813dd2f28239dc1bb2b0290e5232a2447970e0a6898b2a4cdd"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.492441 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" event={"ID":"e9a409b5-e519-4c64-bc56-0b74757f2181","Type":"ContainerStarted","Data":"71d39971d170735b8c8c23ba28563a7279d991261ca0796ba01c898d1d545fa8"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.508389 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pzj8w" event={"ID":"db7a69ec-2a82-4f9b-b83a-42237a02087e","Type":"ContainerStarted","Data":"a7af6a04e7cd4d5c52b9ee75410182cb2ee12111f77496e96fb1c9b65cf071ec"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.519581 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" event={"ID":"a9e87e73-cad4-48f0-81f9-d636cd123278","Type":"ContainerStarted","Data":"2df7dfe6b8cd6f8ac6ce3bca874c3990aabcfba5a1e0b6f7e4e698bc7ac687ef"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.519638 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" event={"ID":"a9e87e73-cad4-48f0-81f9-d636cd123278","Type":"ContainerStarted","Data":"e2430dba990f43198415f44fd75c50ccb8307e6c6829cb533d4dbeef52eef739"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.521319 4769 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" podStartSLOduration=122.521302884 podStartE2EDuration="2m2.521302884s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.51934831 +0000 UTC m=+141.930458239" watchObservedRunningTime="2026-01-22 13:46:02.521302884 +0000 UTC m=+141.932412803" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.525490 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.525733 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.525781 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.525905 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.526021 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.026002723 +0000 UTC m=+142.437112652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.539538 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" event={"ID":"db199c04-6231-46b3-a4e7-5cd74604b005","Type":"ContainerStarted","Data":"1783eb565dba674e2215e780ddfb9a85c4591980102182b29cf78e91f7baeb4b"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.548762 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" podStartSLOduration=122.548745139 podStartE2EDuration="2m2.548745139s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.548218124 +0000 UTC m=+141.959328073" watchObservedRunningTime="2026-01-22 13:46:02.548745139 +0000 UTC m=+141.959855078" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.562869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" event={"ID":"153c6af8-5ac1-4256-ad20-992ad604c61b","Type":"ContainerStarted","Data":"4787a7edab73af9b0c9225ffaebb8d363b75f78cc5c3797c1b9178c04d12396b"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.562911 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" event={"ID":"153c6af8-5ac1-4256-ad20-992ad604c61b","Type":"ContainerStarted","Data":"5874e352d894d8b1ca7ab3f6d108eb339801a41c4b601856bf6a2acb1cfda348"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.566378 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxbp4"] Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.570944 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"] Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.571026 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4"
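The MountVolume.MountDevice and UnmountVolume.TearDown failures in this window all share one root cause: the kubevirt.io.hostpath-provisioner CSI driver has not yet registered with this kubelet's plugin manager, so every attach, mount, and teardown touching pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 is requeued with a 500ms backoff until registration completes (the csi-hostpathplugin-xdxvs pod only reports ContainerStarted later in the log). Successful registrations are mirrored into the node's CSINode object, so an empty or incomplete driver list there corresponds exactly to the "not found in the list of registered CSI drivers" error. A minimal sketch of checking that list with client-go; the kubeconfig path and the node name "crc" are assumptions, not values taken from this log:

// Sketch: list the CSI drivers currently registered on a node by reading
// its CSINode object. Kubeconfig path and node name are assumed.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Until kubevirt.io.hostpath-provisioner appears in this list, the
	// volume manager keeps failing and requeuing the operations above.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}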
Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.572381 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.591889 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" event={"ID":"81769776-c586-45a0-a9ed-42ce4789bb28","Type":"ContainerStarted","Data":"fda6bd1004f0814284e3117f625ff12c08d18ec3c6e15a79839178425b5b3107"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.593294 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.622180 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" event={"ID":"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43","Type":"ContainerStarted","Data":"7a1b23df60e71e322b4eb4aaade21c96a3fb3ac691e403b084152b410f28c70a"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.622226 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" event={"ID":"e7c7c3d4-58d6-4bd2-a85c-7b933bb20d43","Type":"ContainerStarted","Data":"d4b75c72c9393a99da0514694605c5e1c9e01efa513b9e7993959024dc8d095e"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.622269 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.623079 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ggj4q" podStartSLOduration=7.623061174 podStartE2EDuration="7.623061174s" podCreationTimestamp="2026-01-22 13:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.593189192 +0000 UTC m=+142.004299111" watchObservedRunningTime="2026-01-22 13:46:02.623061174 +0000 UTC m=+142.034171103" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.624140 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.627600 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.627636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.627784 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.627845 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.630625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.632712 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.132697069 +0000 UTC m=+142.543806998 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.633303 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.656368 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" event={"ID":"1fbc7f2a-fce4-4747-9a96-1fc4631a6197","Type":"ContainerStarted","Data":"feb7629d8f114fe6483974b25cf3b1820b5dd34a46c66b845e4d676f51cc766b"} Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.658083 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") pod \"certified-operators-7wh4n\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.663601 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2vm4g" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.681581 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" podStartSLOduration=122.681560944 podStartE2EDuration="2m2.681560944s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.681168702 +0000 UTC m=+142.092278641" watchObservedRunningTime="2026-01-22 13:46:02.681560944 +0000 UTC m=+142.092670893" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.685161 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2s5j2" podStartSLOduration=123.685152552 podStartE2EDuration="2m3.685152552s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.624147794 +0000 UTC m=+142.035257723" watchObservedRunningTime="2026-01-22 13:46:02.685152552 +0000 UTC m=+142.096262491" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.729691 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.730036 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.730091 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.730112 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.731134 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.231114507 +0000 UTC m=+142.642224436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.743708 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-gcpwt" podStartSLOduration=122.743691483 podStartE2EDuration="2m2.743691483s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.703513307 +0000 UTC m=+142.114623236" watchObservedRunningTime="2026-01-22 13:46:02.743691483 +0000 UTC m=+142.154801412" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.745193 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q8sxk" podStartSLOduration=122.745188284 podStartE2EDuration="2m2.745188284s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.741919865 +0000 UTC m=+142.153029794" watchObservedRunningTime="2026-01-22 13:46:02.745188284 +0000 UTC m=+142.156298213" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.769749 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ks9m"] Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.771018 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m"
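The pod_startup_latency_tracker entries are straightforward to verify by hand: podStartSLOduration is observedRunningTime minus podCreationTimestamp, and because firstStartedPulling/lastFinishedPulling are the Go zero time (0001-01-01, i.e. no image pull was needed), it equals podStartE2EDuration. For catalog-operator-68c6474976-q8sxk above: 13:46:02.745188284 minus 13:44:00 gives 122.745188284s, exactly the logged "2m2.745188284s". A minimal sketch of the same subtraction, with both timestamps copied verbatim from the entry above:

// Sketch: reproduce podStartSLOduration for catalog-operator from the
// timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-22 13:44:00 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-22 13:46:02.745188284 +0000 UTC")

	// With zero-valued pulling timestamps (no image pull), the SLO
	// duration is simply observedRunningTime - podCreationTimestamp.
	fmt.Println(running.Sub(created)) // 2m2.745188284s
}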
Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.776751 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" podStartSLOduration=122.776733593 podStartE2EDuration="2m2.776733593s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.77518272 +0000 UTC m=+142.186292659" watchObservedRunningTime="2026-01-22 13:46:02.776733593 +0000 UTC m=+142.187843522" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.820705 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ks9m"] Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.841522 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.842221 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.842378 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.842437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.843043 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.343028707 +0000 UTC m=+142.754138636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.873253 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.874082 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.930230 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-28gzs" podStartSLOduration=122.930207905 podStartE2EDuration="2m2.930207905s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.919257334 +0000 UTC m=+142.330367263" watchObservedRunningTime="2026-01-22 13:46:02.930207905 +0000 UTC m=+142.341317834" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.939695 4769 csr.go:261] certificate signing request csr-gj856 is approved, waiting to be issued Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.939842 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") pod \"community-operators-lxbp4\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.958872 4769 csr.go:257] certificate signing request csr-gj856 is issued Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.959305 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5rnmz"] Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960088 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960221 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960251 
4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960313 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:02 crc kubenswrapper[4769]: E0122 13:46:02.960412 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.460399317 +0000 UTC m=+142.871509246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.960688 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.979019 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-9z2dj" podStartSLOduration=122.978999008 podStartE2EDuration="2m2.978999008s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:02.96815626 +0000 UTC m=+142.379266189" watchObservedRunningTime="2026-01-22 13:46:02.978999008 +0000 UTC m=+142.390108927" Jan 22 13:46:02 crc kubenswrapper[4769]: I0122 13:46:02.999099 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"] Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.006083 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" podStartSLOduration=123.006066513 podStartE2EDuration="2m3.006066513s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:03.002616068 +0000 UTC m=+142.413725997" watchObservedRunningTime="2026-01-22 13:46:03.006066513 +0000 UTC m=+142.417176442" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.006864 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:03 crc kubenswrapper[4769]: 
[-]has-synced failed: reason withheld Jan 22 13:46:03 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:03 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.006937 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.033161 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.055780 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.061768 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062211 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062282 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062357 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062495 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.062848 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.063654 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.064159 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.564147571 +0000 UTC m=+142.975257500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.064818 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.100984 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") pod \"certified-operators-2ks9m\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.166438 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.166780 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.166832 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.166868 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.167223 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.667193867 +0000 UTC m=+143.078303796 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.167406 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.167840 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.184256 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-v24vn" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.204354 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") pod \"community-operators-5rnmz\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.269750 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.270082 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.770069248 +0000 UTC m=+143.181179177 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.370703 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.371116 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.871096708 +0000 UTC m=+143.282206637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.372739 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.385859 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.476213 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.477717 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:03.977700661 +0000 UTC m=+143.388810590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.585289 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.585399 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.085380194 +0000 UTC m=+143.496490133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.585558 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.585984 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.085976471 +0000 UTC m=+143.497086400 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.609783 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.689331 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.689709 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.189695505 +0000 UTC m=+143.600805434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.724745 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" event={"ID":"15723c66-27d3-4cea-9962-e75bbe7bb967","Type":"ContainerStarted","Data":"ca83742f3ffbd2cbede8c2894a0b9fa6eb0b873be05c34d082d77e936acb6ff4"} Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.736173 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"cc790897f2c03cd237b709a253cb8feb60b3f8c8e7eec02f6850961c5370fd8c"} Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.760078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tv6dp" event={"ID":"e9a409b5-e519-4c64-bc56-0b74757f2181","Type":"ContainerStarted","Data":"9e3cb59eace57b4102e496545a698e917cd834c618e16e3081ae1ebd33ad7120"} Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.784319 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"] Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.790484 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.790555 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" event={"ID":"73369200-053d-4d9d-a775-c3cb76119697","Type":"ContainerStarted","Data":"382ca9326aade27f0aab2053e02cd05727dacd8574e389c25ff24d8ec2837257"} Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.790849 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.290778316 +0000 UTC m=+143.701888245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: W0122 13:46:03.813414 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d9e80ce_c46e_4a99_814e_0d9b1b65623f.slice/crio-87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc WatchSource:0}: Error finding container 87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc: Status 404 returned error can't find the container with id 87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.827689 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m5n64" podStartSLOduration=123.827671581 podStartE2EDuration="2m3.827671581s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:03.826970192 +0000 UTC m=+143.238080121" watchObservedRunningTime="2026-01-22 13:46:03.827671581 +0000 UTC m=+143.238781510" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.838137 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" event={"ID":"e01e843d-f221-43ed-a309-e21fe298f64f","Type":"ContainerStarted","Data":"5df2bcc88c3539be053859b0ed4af5a02ecc6750223637d5feb1f5a2787fbabb"} Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.838173 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" event={"ID":"e01e843d-f221-43ed-a309-e21fe298f64f","Type":"ContainerStarted","Data":"045108c3c52dc6747143cb77f27616fa92adce96e5936b3f02feae3c1494b215"} Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.883430 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkk84" event={"ID":"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515","Type":"ContainerStarted","Data":"01a3f246379213d65b9a158fbe47e4a4c4c2be6de6c3bcf62110d9d310295640"} Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.893587 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.895190 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.395168579 +0000 UTC m=+143.806278498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.977749 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 13:41:02 +0000 UTC, rotation deadline is 2026-10-12 20:04:18.388215761 +0000 UTC Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.978076 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6318h18m14.410143151s for next certificate rotation Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.985967 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" event={"ID":"10a252bf-8be9-40ee-9632-4abbb989e43d","Type":"ContainerStarted","Data":"629d0b002a1282938d8599528a95f65e462ecb85cc83338d24c0a454c4c4b054"} Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.988851 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-d8wjb" podStartSLOduration=123.988827886 podStartE2EDuration="2m3.988827886s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:03.883814476 +0000 UTC m=+143.294924415" watchObservedRunningTime="2026-01-22 13:46:03.988827886 +0000 UTC m=+143.399937815" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.990829 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ks9m"] Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.994004 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:03 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:03 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:03 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.994042 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:03 crc kubenswrapper[4769]: I0122 13:46:03.997003 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:03 crc kubenswrapper[4769]: E0122 13:46:03.997300 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.497288508 +0000 UTC m=+143.908398437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.019044 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-98pt8" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.082456 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"] Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.098507 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.100097 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.600074257 +0000 UTC m=+144.011184186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.108695 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" event={"ID":"d8b75cc3-465e-4542-82ee-4950744e89a0","Type":"ContainerStarted","Data":"a6f1bffc6f3dc034901807a0acbe99c9f655397cff040c51f93bb72fd120f61b"} Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.120205 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.121559 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.621547618 +0000 UTC m=+144.032657537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.123529 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" event={"ID":"81a5be64-af9a-4376-9105-c36371ad5069","Type":"ContainerStarted","Data":"01c918f6a922286d54e6f0f6dd759a743d396e8baa23df0990ec2306e48769b5"} Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.147106 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ds5qk" podStartSLOduration=124.14708186 podStartE2EDuration="2m4.14708186s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.135848271 +0000 UTC m=+143.546958200" watchObservedRunningTime="2026-01-22 13:46:04.14708186 +0000 UTC m=+143.558191779" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.165954 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" podStartSLOduration=124.1659314 podStartE2EDuration="2m4.1659314s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.165069155 +0000 UTC m=+143.576179084" watchObservedRunningTime="2026-01-22 13:46:04.1659314 +0000 UTC m=+143.577041349" Jan 22 13:46:04 
crc kubenswrapper[4769]: I0122 13:46:04.203040 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-5lfqv" event={"ID":"1fbc7f2a-fce4-4747-9a96-1fc4631a6197","Type":"ContainerStarted","Data":"4704dec99621c6846e3261de6e333b789fa362c283134a1fab3ae7c38e0c05b3"} Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.221625 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.222590 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" event={"ID":"ba0299e2-1902-461d-bf42-f3d5dfe205ff","Type":"ContainerStarted","Data":"1ca238b1d6d0149a30a8e8311d14f69bde547eeb40d5660b4b7dd4e246123077"} Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.223466 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.723447102 +0000 UTC m=+144.134557031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.254445 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mm5p" podStartSLOduration=124.254427324 podStartE2EDuration="2m4.254427324s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.247569556 +0000 UTC m=+143.658679475" watchObservedRunningTime="2026-01-22 13:46:04.254427324 +0000 UTC m=+143.665537253" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.264249 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" event={"ID":"3f91eb97-e4cc-4a67-9426-7aec499b4485","Type":"ContainerStarted","Data":"61a3fa95fd4a0fa91ff0dccbe0ce875b95e5770d8cc2831c9dd54d8ce1d26ba6"} Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.264326 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" event={"ID":"3f91eb97-e4cc-4a67-9426-7aec499b4485","Type":"ContainerStarted","Data":"3a7d0770e83ad70664df9e69ba0f3f806e8403b2ecdb4a1cf5a3de483a6c5fd6"} Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.283937 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" 
event={"ID":"0335a481-e6c1-459c-8325-5da8dfcbcdb1","Type":"ContainerStarted","Data":"b0ea037cd7cca93fb3844e9a96e5c8964f5fa0135c19062b7474d17fcd87d1e5"} Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.288752 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rcksw" podStartSLOduration=124.288732559 podStartE2EDuration="2m4.288732559s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.287839914 +0000 UTC m=+143.698949843" watchObservedRunningTime="2026-01-22 13:46:04.288732559 +0000 UTC m=+143.699842488" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.310421 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.332593 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.332688 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-9nmqg" podStartSLOduration=124.332667417 podStartE2EDuration="2m4.332667417s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:04.324119622 +0000 UTC m=+143.735229551" watchObservedRunningTime="2026-01-22 13:46:04.332667417 +0000 UTC m=+143.743777346" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.337636 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.837621274 +0000 UTC m=+144.248731203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.376945 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.378220 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.381217 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.385932 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.434317 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.434592 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.434691 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.434713 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.434813 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:04.934784018 +0000 UTC m=+144.345893947 (durationBeforeRetry 500ms). 
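
Annotation: the podStartSLOduration entries above come from the kubelet's pod startup latency tracker. In each of them firstStartedPulling and lastFinishedPulling are the zero time, so no image-pull window is subtracted and the SLO duration equals the end-to-end duration, observedRunningTime minus podCreationTimestamp (124.14708186s = 13:46:04.14708186 minus 13:44:00). A minimal sketch of that arithmetic; treating the SLO duration as "end-to-end minus the pull window" is an assumption about the tracker's semantics, not a quote of its code:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339Nano, "2026-01-22T13:44:00Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-01-22T13:46:04.14708186Z")

        e2e := running.Sub(created) // podStartE2EDuration: 2m4.14708186s
        var pull time.Duration      // zero here: firstStartedPulling/lastFinishedPulling are the zero time
        slo := e2e - pull           // assumed: SLO duration excludes the image pull window

        fmt.Println(e2e, slo) // 2m4.14708186s 2m4.14708186s
    }
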
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.536671 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.537068 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.537090 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.537149 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.537988 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.037973467 +0000 UTC m=+144.449083396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.538558 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.538710 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.571634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") pod \"redhat-marketplace-v8jk5\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.639165 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.639343 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.139318116 +0000 UTC m=+144.550428045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.639498 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.639829 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.139821639 +0000 UTC m=+144.550931568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.739876 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.740071 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.240040547 +0000 UTC m=+144.651150466 (durationBeforeRetry 500ms). 
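
Annotation: every MountVolume.MountDevice and UnmountVolume.TearDown failure in this window dies at the same check. The kubelet resolves a CSI driver by name from an in-memory registry that is only populated once the driver registers over its plugin socket; kubevirt.io.hostpath-provisioner has not registered yet, so each operation is requeued with a fixed 500ms durationBeforeRetry. A minimal Go sketch of that lookup-then-requeue shape (illustrative names, not kubelet internals):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // registry stands in for the "list of registered CSI drivers" the errors refer to.
    type registry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> unix socket endpoint
    }

    func (r *registry) lookup(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        if ep, ok := r.drivers[name]; ok {
            return ep, nil
        }
        return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
    }

    func main() {
        r := &registry{drivers: map[string]string{}}
        const durationBeforeRetry = 500 * time.Millisecond // matches the log
        for attempt := 1; attempt <= 3; attempt++ {
            if _, err := r.lookup("kubevirt.io.hostpath-provisioner"); err != nil {
                fmt.Printf("attempt %d: %v; no retries permitted for %v\n", attempt, err, durationBeforeRetry)
                time.Sleep(durationBeforeRetry)
            }
        }
    }

Once the driver registers (see the 13:46:05 entries below), the same lookup starts returning an endpoint and both the mount and the teardown go through.
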
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.740215 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.740659 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.240650314 +0000 UTC m=+144.651760243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.756517 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.757448 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.777747 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.804458 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.841218 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.844143 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.34408113 +0000 UTC m=+144.755191059 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.863350 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.863619 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.863668 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.863892 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.864298 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.364280896 +0000 UTC m=+144.775390825 (durationBeforeRetry 500ms). 
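
Annotation: the interleaved reconciler_common lines are the volume manager's reconciler diffing a desired state of the world (volumes the scheduled pods should have) against the actual state (what is currently mounted): it starts MountVolume for missing mounts and UnmountVolume for mounts whose pod is gone. Note that the same PVC is simultaneously being unmounted for deleted pod UID 8f668bae-612b-4b75-9490-919e737c6a3b and mounted for the new image-registry pod. A toy version of that diff loop, assuming simple set semantics:

    package main

    import "fmt"

    type volume struct{ name, pod string }

    // reconcile mounts what is desired but absent and unmounts what is
    // present but no longer desired (toy version of the reconciler).
    func reconcile(desired, actual map[volume]bool) {
        for v := range desired {
            if !actual[v] {
                fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.name, v.pod)
            }
        }
        for v := range actual {
            if !desired[v] {
                fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
            }
        }
    }

    func main() {
        pvc := "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"
        desired := map[volume]bool{{pvc, "image-registry-697d97f7c8-jhd8d"}: true}
        actual := map[volume]bool{{pvc, "8f668bae-612b-4b75-9490-919e737c6a3b"}: true}
        reconcile(desired, actual)
    }
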
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.964480 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.964702 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.964727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.964815 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.965214 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: E0122 13:46:04.965280 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.465265735 +0000 UTC m=+144.876375664 (durationBeforeRetry 500ms). 
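
Annotation: the Generic (PLEG) lines in this window ("container finished", exitCode=0, paired ContainerDied/ContainerStarted events) come from the pod lifecycle event generator, which periodically relists containers and emits an event per state transition; the exitCode=0 entries below are the marketplace catalog pods' extract containers completing normally. A toy relist pass under those assumptions (container IDs shortened for illustration):

    package main

    import "fmt"

    type state map[string]string // containerID -> "running" | "exited"

    // relist diffs the previous container snapshot against the current one
    // and emits ContainerStarted / ContainerDied events, PLEG-style.
    func relist(prev, cur state) {
        for id, s := range cur {
            switch {
            case prev[id] == "" && s == "running":
                fmt.Println("event ContainerStarted", id)
            case prev[id] == "running" && s == "exited":
                fmt.Println("event ContainerDied", id) // e.g. a catalog extract container exiting with code 0
            }
        }
    }

    func main() {
        prev := state{"acd4331bf5a9": "running"}
        cur := state{"acd4331bf5a9": "exited", "9d4a213a14f5": "running"}
        relist(prev, cur)
    }
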
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.965686 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.990119 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:04 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:04 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:04 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.990193 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:04 crc kubenswrapper[4769]: I0122 13:46:04.999732 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") pod \"redhat-marketplace-j2rz6\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.019981 4769 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.068853 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: E0122 13:46:05.069276 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.569257986 +0000 UTC m=+144.980367915 (durationBeforeRetry 500ms). 
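
Annotation: the plugin_watcher.go line just above is the turning point. The driver's registrar has finally created its registration socket under /var/lib/kubelet/plugins_registry/, and the kubelet's plugin watcher, essentially an inotify loop over that directory, queues it for registration. A compressed sketch using the fsnotify package (chosen here for illustration; the real watcher carries more bookkeeping):

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
            log.Fatal(err)
        }
        for ev := range w.Events {
            if ev.Op&fsnotify.Create != 0 {
                // e.g. kubevirt.io.hostpath-provisioner-reg.sock appearing
                fmt.Printf("Adding socket path or updating timestamp to desired state cache path=%q at %s\n",
                    ev.Name, time.Now().Format(time.RFC3339Nano))
            }
        }
    }
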
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jhd8d" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.087404 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.170244 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:05 crc kubenswrapper[4769]: E0122 13:46:05.170769 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 13:46:05.6707503 +0000 UTC m=+145.081860229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.238195 4769 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T13:46:05.020019271Z","Handler":null,"Name":""} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.244816 4769 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.244848 4769 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.272291 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.275713 4769 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
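
Annotation: registration itself is the two-step handshake logged above: the kubelet dials the registration socket, validates the driver name and endpoint (the csi_plugin.go lines), and from then on attacher.MountDevice can construct a client, which is why the long run of "driver name ... not found" errors stops here. The "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." line reflects a NodeGetCapabilities answer; a hedged sketch against the CSI spec's Go bindings (the unix-socket dial target is taken from the log, the rest is illustrative, not the kubelet's actual wiring):

    package main

    import (
        "context"
        "log"

        "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    // stageSupported asks the driver whether NodeStageVolume is implemented.
    // When it is not, the kubelet skips the device-staging step and goes
    // straight to publishing, the "Skipping MountDevice..." path above.
    func stageSupported(ctx context.Context, conn *grpc.ClientConn) (bool, error) {
        resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
        if err != nil {
            return false, err
        }
        for _, c := range resp.GetCapabilities() {
            if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        ok, err := stageSupported(context.Background(), conn)
        log.Println("STAGE_UNSTAGE_VOLUME:", ok, err)
    }
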
Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.277115 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.283857 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:46:05 crc kubenswrapper[4769]: W0122 13:46:05.285696 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fbf5655_9685_4e15_a6af_41793097be11.slice/crio-a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559 WatchSource:0}: Error finding container a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559: Status 404 returned error can't find the container with id a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.306296 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"3f92cbd07839fbaa3d584c387dc2cafe2802444ba5d5904cc7a5d5ed77b73e8c"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.306334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"5b5ad09fdc86a17007c33355037be7b7436f7222bd66d3af98cfc8a19f27a448"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.308333 4769 generic.go:334] "Generic (PLEG): container finished" podID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerID="acd4331bf5a97dd63bc534d1279a9dc1a57106f0b79215b9c6214a3510910a34" exitCode=0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.308380 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerDied","Data":"acd4331bf5a97dd63bc534d1279a9dc1a57106f0b79215b9c6214a3510910a34"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.308395 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerStarted","Data":"9d4a213a14f5a21b9ecd231875d6aa22cbbfb7d75a58db27a2f98d97feb1dafb"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.310322 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.315061 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerID="f32dd634065691a644d2461a7fae6aa8b2a0092557591202f1589d051602d962" exitCode=0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.315129 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" 
event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerDied","Data":"f32dd634065691a644d2461a7fae6aa8b2a0092557591202f1589d051602d962"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.315153 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerStarted","Data":"87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.338596 4769 generic.go:334] "Generic (PLEG): container finished" podID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerID="4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd" exitCode=0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.338681 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerDied","Data":"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.338710 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerStarted","Data":"b542c5dbcb707bb656b636afb6aa1bcc3a67f0090bf88281e297bd475aa9bd3f"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.341299 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jhd8d\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.343499 4769 generic.go:334] "Generic (PLEG): container finished" podID="3b69c283-f109-4f09-9a01-8d21d3764892" containerID="046d05b3f47f3e1cd122e05caaffbaade2a750f09bb666394477d6007a1313e9" exitCode=0 Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.343669 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerDied","Data":"046d05b3f47f3e1cd122e05caaffbaade2a750f09bb666394477d6007a1313e9"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.343774 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerStarted","Data":"95901b43f1b0b192d242724acdf435d55c1a459bc7ffc435091c0491b7b2a77a"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.353083 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.367366 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rkk84" event={"ID":"bf805bae-0da1-4a8b-a8c8-6c99cf8ce515","Type":"ContainerStarted","Data":"48f5c380a7ea4ee98b4e34be622cd179a9b205dd6bf31d14cd339a36d7938822"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.367472 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rkk84" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.374814 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.381879 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" event={"ID":"15723c66-27d3-4cea-9962-e75bbe7bb967","Type":"ContainerStarted","Data":"3cad4256a432d1a1f02170ee5ecd3bf344bb54c0f2b371cbc98acd9bbe0e5542"} Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.403943 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.463359 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.474529 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rkk84" podStartSLOduration=10.474512347 podStartE2EDuration="10.474512347s" podCreationTimestamp="2026-01-22 13:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:05.465357806 +0000 UTC m=+144.876467745" watchObservedRunningTime="2026-01-22 13:46:05.474512347 +0000 UTC m=+144.885622276" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.474810 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" podStartSLOduration=126.474805576 podStartE2EDuration="2m6.474805576s" podCreationTimestamp="2026-01-22 13:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:05.450224969 +0000 UTC m=+144.861334918" watchObservedRunningTime="2026-01-22 13:46:05.474805576 +0000 UTC m=+144.885915505" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.722440 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:46:05 crc kubenswrapper[4769]: W0122 13:46:05.729092 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75dcccce_425a_46ab_bfeb_dc5a0ee835d4.slice/crio-65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a WatchSource:0}: Error finding container 65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a: Status 404 returned error can't find the container with id 65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.752464 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.753474 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.756676 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.784382 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.784513 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.784562 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.829782 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.885452 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.885527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.885564 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.886933 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.887031 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " 
pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.914775 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") pod \"redhat-operators-k2w22\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.988341 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:05 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:05 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:05 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:05 crc kubenswrapper[4769]: I0122 13:46:05.988408 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.138424 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.151627 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9x475"] Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.152602 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.166061 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x475"] Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.190364 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.190485 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.190567 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.291902 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " 
pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.292016 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.292126 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.292421 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.292461 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.314403 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") pod \"redhat-operators-9x475\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.441748 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" event={"ID":"75dcccce-425a-46ab-bfeb-dc5a0ee835d4","Type":"ContainerStarted","Data":"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.442127 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" event={"ID":"75dcccce-425a-46ab-bfeb-dc5a0ee835d4","Type":"ContainerStarted","Data":"65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.442164 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.468774 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" podStartSLOduration=126.468754927 podStartE2EDuration="2m6.468754927s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:06.466585037 +0000 UTC m=+145.877694966" watchObservedRunningTime="2026-01-22 13:46:06.468754927 +0000 UTC m=+145.879864866" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.470495 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.474259 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.496700 4769 generic.go:334] "Generic (PLEG): container finished" podID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerID="bd94526c2545e7d42d2caa419fef7b4eaae03cecfaac7722e27dfd4ed49fa03a" exitCode=0 Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.496800 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerDied","Data":"bd94526c2545e7d42d2caa419fef7b4eaae03cecfaac7722e27dfd4ed49fa03a"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.496848 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerStarted","Data":"6e66e2dbf8bc8a080c55b13a7260516fe1212a4c0154bcf230d5878c8ebeeeed"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.529089 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.559029 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" event={"ID":"6e9c7f00-95b3-4453-8d82-df8b88a2bc8a","Type":"ContainerStarted","Data":"bc5c05abf51e8270472b3dd332fa8bf294f31fb227e2e85b20e544ed47f8d921"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.594078 4769 generic.go:334] "Generic (PLEG): container finished" podID="9fbf5655-9685-4e15-a6af-41793097be11" containerID="3502879dadc38b5cd99def96e405968a047479756eeea61ee2071af582a36fdd" exitCode=0 Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.594298 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerDied","Data":"3502879dadc38b5cd99def96e405968a047479756eeea61ee2071af582a36fdd"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.594360 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerStarted","Data":"a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559"} Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.643477 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-xdxvs" podStartSLOduration=11.643459734 podStartE2EDuration="11.643459734s" podCreationTimestamp="2026-01-22 13:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:06.643020122 +0000 UTC m=+146.054130061" watchObservedRunningTime="2026-01-22 13:46:06.643459734 +0000 UTC m=+146.054569663" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.910198 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.945413 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-9x475"] Jan 22 13:46:06 crc kubenswrapper[4769]: W0122 13:46:06.962004 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod143027dc_ac6a_442f_bf57_3dcd7efd0427.slice/crio-eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b WatchSource:0}: Error finding container eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b: Status 404 returned error can't find the container with id eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.988556 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:06 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:06 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:06 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:06 crc kubenswrapper[4769]: I0122 13:46:06.988608 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.317990 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-mgft7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.318044 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mgft7" podUID="92eb7fb7-d1b8-45ad-b8ff-8411d04eb048" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.318882 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-mgft7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.319129 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mgft7" podUID="92eb7fb7-d1b8-45ad-b8ff-8411d04eb048" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.490779 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.491230 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.497854 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.498219 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.498258 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.505331 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.511575 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.511704 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.513659 4769 patch_prober.go:28] interesting pod/console-f9d7485db-nwrtw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.513725 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nwrtw" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.643175 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" event={"ID":"3ef7a187-ce98-488c-a9b0-e16449e2882f","Type":"ContainerDied","Data":"e652943776f78a5fd95ced60a7e853ebc62ea8a256a4dea93d8512bf63d1796f"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.643248 4769 generic.go:334] "Generic (PLEG): container finished" podID="3ef7a187-ce98-488c-a9b0-e16449e2882f" containerID="e652943776f78a5fd95ced60a7e853ebc62ea8a256a4dea93d8512bf63d1796f" exitCode=0 Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.654066 4769 generic.go:334] "Generic (PLEG): container finished" podID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerID="5773768bc9993d556325ab6b5012f24996ced11ddc55ad2bd215bb338220f42b" exitCode=0 Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.654121 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerDied","Data":"5773768bc9993d556325ab6b5012f24996ced11ddc55ad2bd215bb338220f42b"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.654145 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerStarted","Data":"ab73ea8d8d9a566fef3480c2969fb2296deb50f4ddfdc8ecead203c9dda4e719"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.657915 4769 generic.go:334] "Generic (PLEG): container finished" podID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerID="5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb" exitCode=0 Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.658037 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" 
event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerDied","Data":"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.658055 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerStarted","Data":"eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b"} Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.664756 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jjt2k" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.665210 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t5985" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.935368 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.935454 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.948618 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.956567 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.986446 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.990371 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:07 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:07 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:07 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:07 crc kubenswrapper[4769]: I0122 13:46:07.990430 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.037243 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.037335 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.041531 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.053066 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.107038 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.134093 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.148636 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.183915 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.184694 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.186893 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.190907 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.186718 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.353377 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.353726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.455434 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.455494 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.455574 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.478414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.525490 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.725095 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c99d92123863bc1d707dc9890e2e74fa177cd611a96a52755527862f9ed84368"} Jan 22 13:46:08 crc kubenswrapper[4769]: I0122 13:46:08.998912 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:08 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:08 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:08 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:08.999341 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.172465 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.174991 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 13:46:09 crc kubenswrapper[4769]: W0122 13:46:09.197195 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poded99cfde_1902_4453_9add_80bcda64e51f.slice/crio-24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20 WatchSource:0}: Error finding container 24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20: Status 404 returned error can't find the container with id 24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20 Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.279532 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") pod \"3ef7a187-ce98-488c-a9b0-e16449e2882f\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.279618 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") pod \"3ef7a187-ce98-488c-a9b0-e16449e2882f\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.279638 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") pod \"3ef7a187-ce98-488c-a9b0-e16449e2882f\" (UID: \"3ef7a187-ce98-488c-a9b0-e16449e2882f\") " Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.280448 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume" (OuterVolumeSpecName: "config-volume") pod "3ef7a187-ce98-488c-a9b0-e16449e2882f" (UID: 
"3ef7a187-ce98-488c-a9b0-e16449e2882f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.293027 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3ef7a187-ce98-488c-a9b0-e16449e2882f" (UID: "3ef7a187-ce98-488c-a9b0-e16449e2882f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.294265 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874" (OuterVolumeSpecName: "kube-api-access-n8874") pod "3ef7a187-ce98-488c-a9b0-e16449e2882f" (UID: "3ef7a187-ce98-488c-a9b0-e16449e2882f"). InnerVolumeSpecName "kube-api-access-n8874". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.381384 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef7a187-ce98-488c-a9b0-e16449e2882f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.381421 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8874\" (UniqueName: \"kubernetes.io/projected/3ef7a187-ce98-488c-a9b0-e16449e2882f-kube-api-access-n8874\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.381471 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ef7a187-ce98-488c-a9b0-e16449e2882f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.748486 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ed99cfde-1902-4453-9add-80bcda64e51f","Type":"ContainerStarted","Data":"24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.751065 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" event={"ID":"3ef7a187-ce98-488c-a9b0-e16449e2882f","Type":"ContainerDied","Data":"b5f0b3f3f7b7a0b35bdff04091a4f43dc2a4d7a638db51c8e64ac5ca77fff8bf"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.751097 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f0b3f3f7b7a0b35bdff04091a4f43dc2a4d7a638db51c8e64ac5ca77fff8bf" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.751153 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484825-hgsdh" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.756308 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8283411590af6ac01c407eb5eac96c45560649f3eed1ec2d108aacafba468b5c"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.756412 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.766109 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b2337fd96f64c22418ef9b022ca0c9a1e82691be7d47643651c83f901b1b9110"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.775985 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"75eaabaf74ef52dc0ddf7f9dae2d842ae826de4142370d68a79a182670b120fc"} Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.988031 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:09 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:09 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:09 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:09 crc kubenswrapper[4769]: I0122 13:46:09.988089 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.481723 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.481842 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.793215 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bc32aa32cf748cd584d5cfeb225a4682c619a4b9f7a5ba38151e4aad68ec7d04"} Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.808402 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d3cb45eeee556f0f1d0899e75c07fef57250967dace39b43969090ad0ff41dff"} Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.817563 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ed99cfde-1902-4453-9add-80bcda64e51f","Type":"ContainerStarted","Data":"00f3666902563fa3aae0f23c8fc0eed6fb06623043f3bbcf88522aa9cb27e647"} Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.879642 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.879624261 podStartE2EDuration="2.879624261s" podCreationTimestamp="2026-01-22 13:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:10.876897846 +0000 UTC m=+150.288007775" watchObservedRunningTime="2026-01-22 13:46:10.879624261 +0000 UTC m=+150.290734190" Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.987828 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:10 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:10 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:10 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:10 crc kubenswrapper[4769]: I0122 13:46:10.987881 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.392694 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 13:46:11 crc kubenswrapper[4769]: E0122 13:46:11.394075 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef7a187-ce98-488c-a9b0-e16449e2882f" containerName="collect-profiles" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.394092 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef7a187-ce98-488c-a9b0-e16449e2882f" containerName="collect-profiles" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.394260 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef7a187-ce98-488c-a9b0-e16449e2882f" containerName="collect-profiles" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.395034 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.396686 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.397975 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.398370 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.454439 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.454522 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.556671 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.556782 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.556904 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.609400 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.731844 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.826024 4769 generic.go:334] "Generic (PLEG): container finished" podID="ed99cfde-1902-4453-9add-80bcda64e51f" containerID="00f3666902563fa3aae0f23c8fc0eed6fb06623043f3bbcf88522aa9cb27e647" exitCode=0 Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.827184 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ed99cfde-1902-4453-9add-80bcda64e51f","Type":"ContainerDied","Data":"00f3666902563fa3aae0f23c8fc0eed6fb06623043f3bbcf88522aa9cb27e647"} Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.987870 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:11 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:11 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:11 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:11 crc kubenswrapper[4769]: I0122 13:46:11.988256 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:12 crc kubenswrapper[4769]: I0122 13:46:12.022535 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 13:46:12 crc kubenswrapper[4769]: W0122 13:46:12.033387 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod36001332_1cc9_44dc_8137_c117c2101ecd.slice/crio-999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b WatchSource:0}: Error finding container 999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b: Status 404 returned error can't find the container with id 999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b Jan 22 13:46:12 crc kubenswrapper[4769]: I0122 13:46:12.835368 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"36001332-1cc9-44dc-8137-c117c2101ecd","Type":"ContainerStarted","Data":"999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b"} Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 13:46:12.999978 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:13 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:13 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:13 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 13:46:13.000056 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 13:46:13.501037 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rkk84" Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 
13:46:13.845160 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"36001332-1cc9-44dc-8137-c117c2101ecd","Type":"ContainerStarted","Data":"d40eb4c56433a3c051eab9532b06a720b749ca810d2cdaf3cacba78fc2ce3050"} Jan 22 13:46:13 crc kubenswrapper[4769]: I0122 13:46:13.859263 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.8592394519999997 podStartE2EDuration="2.859239452s" podCreationTimestamp="2026-01-22 13:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:13.85624282 +0000 UTC m=+153.267352749" watchObservedRunningTime="2026-01-22 13:46:13.859239452 +0000 UTC m=+153.270349381" Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.003432 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:14 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:14 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:14 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.003504 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.872851 4769 generic.go:334] "Generic (PLEG): container finished" podID="36001332-1cc9-44dc-8137-c117c2101ecd" containerID="d40eb4c56433a3c051eab9532b06a720b749ca810d2cdaf3cacba78fc2ce3050" exitCode=0 Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.872890 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"36001332-1cc9-44dc-8137-c117c2101ecd","Type":"ContainerDied","Data":"d40eb4c56433a3c051eab9532b06a720b749ca810d2cdaf3cacba78fc2ce3050"} Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.993399 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:14 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:14 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:14 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:14 crc kubenswrapper[4769]: I0122 13:46:14.993775 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:16 crc kubenswrapper[4769]: I0122 13:46:16.003947 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:16 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:16 crc kubenswrapper[4769]: [+]process-running ok Jan 
22 13:46:16 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:16 crc kubenswrapper[4769]: I0122 13:46:16.004023 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:16 crc kubenswrapper[4769]: I0122 13:46:16.995592 4769 patch_prober.go:28] interesting pod/router-default-5444994796-pb7qw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 13:46:16 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Jan 22 13:46:16 crc kubenswrapper[4769]: [+]process-running ok Jan 22 13:46:16 crc kubenswrapper[4769]: healthz check failed Jan 22 13:46:16 crc kubenswrapper[4769]: I0122 13:46:16.995663 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pb7qw" podUID="5c5cf556-ec03-4f29-94ed-13a58f54275c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.322211 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mgft7" Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.511509 4769 patch_prober.go:28] interesting pod/console-f9d7485db-nwrtw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.511565 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-nwrtw" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.987568 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:46:17 crc kubenswrapper[4769]: I0122 13:46:17.991054 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pb7qw" Jan 22 13:46:22 crc kubenswrapper[4769]: I0122 13:46:22.731577 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:46:22 crc kubenswrapper[4769]: I0122 13:46:22.739414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9764ff0b-ae92-470b-af85-7c8bb41642ba-metrics-certs\") pod \"network-metrics-daemon-cfh49\" (UID: \"9764ff0b-ae92-470b-af85-7c8bb41642ba\") " pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:46:22 crc kubenswrapper[4769]: I0122 13:46:22.904736 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cfh49" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.322300 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.330282 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456005 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") pod \"36001332-1cc9-44dc-8137-c117c2101ecd\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456063 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") pod \"36001332-1cc9-44dc-8137-c117c2101ecd\" (UID: \"36001332-1cc9-44dc-8137-c117c2101ecd\") " Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456078 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "36001332-1cc9-44dc-8137-c117c2101ecd" (UID: "36001332-1cc9-44dc-8137-c117c2101ecd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456130 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") pod \"ed99cfde-1902-4453-9add-80bcda64e51f\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456208 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") pod \"ed99cfde-1902-4453-9add-80bcda64e51f\" (UID: \"ed99cfde-1902-4453-9add-80bcda64e51f\") " Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456234 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ed99cfde-1902-4453-9add-80bcda64e51f" (UID: "ed99cfde-1902-4453-9add-80bcda64e51f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456400 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36001332-1cc9-44dc-8137-c117c2101ecd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.456410 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed99cfde-1902-4453-9add-80bcda64e51f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.463128 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "36001332-1cc9-44dc-8137-c117c2101ecd" (UID: "36001332-1cc9-44dc-8137-c117c2101ecd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.463176 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ed99cfde-1902-4453-9add-80bcda64e51f" (UID: "ed99cfde-1902-4453-9add-80bcda64e51f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.557719 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed99cfde-1902-4453-9add-80bcda64e51f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.557775 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36001332-1cc9-44dc-8137-c117c2101ecd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.934055 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.934067 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ed99cfde-1902-4453-9add-80bcda64e51f","Type":"ContainerDied","Data":"24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20"} Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.934710 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d852b9d3e7cd202857e267965466d3a4c751edcfae7482682b1cecb449ab20" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.936566 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"36001332-1cc9-44dc-8137-c117c2101ecd","Type":"ContainerDied","Data":"999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b"} Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.936607 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="999093893dbc0da449822486168213db3d159de59ac829e76e33c04a73e8847b" Jan 22 13:46:24 crc kubenswrapper[4769]: I0122 13:46:24.936676 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 13:46:25 crc kubenswrapper[4769]: I0122 13:46:25.471481 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:46:27 crc kubenswrapper[4769]: I0122 13:46:27.589972 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:46:27 crc kubenswrapper[4769]: I0122 13:46:27.597141 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:46:37 crc kubenswrapper[4769]: I0122 13:46:37.807482 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-jr9vm" Jan 22 13:46:38 crc kubenswrapper[4769]: I0122 13:46:38.111498 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 13:46:39 crc kubenswrapper[4769]: E0122 13:46:39.277872 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 13:46:39 crc kubenswrapper[4769]: E0122 13:46:39.278443 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x86gf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lxbp4_openshift-marketplace(7d9e80ce-c46e-4a99-814e-0d9b1b65623f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 13:46:39 crc kubenswrapper[4769]: E0122 13:46:39.280206 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest 
list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-lxbp4" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" Jan 22 13:46:40 crc kubenswrapper[4769]: E0122 13:46:40.350446 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lxbp4" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" Jan 22 13:46:40 crc kubenswrapper[4769]: E0122 13:46:40.413263 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 13:46:40 crc kubenswrapper[4769]: E0122 13:46:40.413440 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xmkrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2ks9m_openshift-marketplace(bc744951-0370-42be-a1c0-e639d8d8cd31): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 13:46:40 crc kubenswrapper[4769]: E0122 13:46:40.414774 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2ks9m" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" Jan 22 13:46:40 crc kubenswrapper[4769]: I0122 13:46:40.482151 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
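
The PullImage failures and the ImagePullBackOff records around them are the kubelet's standard retry ladder: each failed pull of a catalog index image re-arms a back-off that roughly doubles up to a cap, and "Error syncing pod, skipping" repeats until a pull finally succeeds. A toy sketch of that doubling-with-cap pattern; the base delay, cap, and attempt count are assumptions, since the kubelet's real timer lives inside its runtime manager:

    # Sketch: the ErrImagePull -> ImagePullBackOff retry shape, assumed values.
    import time

    def pull_with_backoff(pull, base=10.0, cap=300.0, attempts=5):
        delay = base
        for _ in range(attempts):
            try:
                return pull()
            except RuntimeError as err:  # stand-in for a failed image pull
                print(f"pull failed ({err}); backing off {delay:.0f}s")
                time.sleep(delay)
                delay = min(delay * 2, cap)  # double the wait, capped
        raise TimeoutError("image pull kept failing; still in back-off")
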
Jan 22 13:46:40 crc kubenswrapper[4769]: I0122 13:46:40.482223 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.307965 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2ks9m" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.381671 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.381862 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hjqjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-9x475_openshift-marketplace(143027dc-ac6a-442f-bf57-3dcd7efd0427): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.383562 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-9x475" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.385587 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system
image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.385697 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkpck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-k2w22_openshift-marketplace(652c2c5a-f885-4bf3-a4f8-73a4717f6a3a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 13:46:43 crc kubenswrapper[4769]: E0122 13:46:43.387266 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-k2w22" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.434859 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-k2w22" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.434906 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-9x475" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.498092 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.498609 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mn7q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-j2rz6_openshift-marketplace(9fbf5655-9685-4e15-a6af-41793097be11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.499831 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-j2rz6" podUID="9fbf5655-9685-4e15-a6af-41793097be11"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.543461 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.543611 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xx5tc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7wh4n_openshift-marketplace(4f403243-0359-478d-a3a6-29a8f0bc29e2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.544784 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7wh4n" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.561579 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.562613 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm4mw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v8jk5_openshift-marketplace(98dd81ac-1a92-4d5a-9e09-bcc49ac33a85): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.563905 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-v8jk5" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.572432 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.572739 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj54v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5rnmz_openshift-marketplace(3b69c283-f109-4f09-9a01-8d21d3764892): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 22 13:46:44 crc kubenswrapper[4769]: E0122 13:46:44.577027 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5rnmz" podUID="3b69c283-f109-4f09-9a01-8d21d3764892"
Jan 22 13:46:44 crc kubenswrapper[4769]: I0122 13:46:44.825144 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cfh49"]
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.047657 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cfh49" event={"ID":"9764ff0b-ae92-470b-af85-7c8bb41642ba","Type":"ContainerStarted","Data":"871759f0c2cb1bf835a48fe1c3c45df35d209a15e67a90ded611c851eb461ac2"}
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.048075 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cfh49" event={"ID":"9764ff0b-ae92-470b-af85-7c8bb41642ba","Type":"ContainerStarted","Data":"fc128d161cc56dbd9945fc65e631262910146990d95c0102a3359c6af7774ef5"}
Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.049185 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7wh4n" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2"
Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.056241 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-j2rz6" podUID="9fbf5655-9685-4e15-a6af-41793097be11"
Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.056335 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5rnmz" podUID="3b69c283-f109-4f09-9a01-8d21d3764892"
Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.056342 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v8jk5" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85"
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.989055 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.989326 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36001332-1cc9-44dc-8137-c117c2101ecd" containerName="pruner"
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.989342 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="36001332-1cc9-44dc-8137-c117c2101ecd" containerName="pruner"
Jan 22 13:46:45 crc kubenswrapper[4769]: E0122 13:46:45.989364 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed99cfde-1902-4453-9add-80bcda64e51f" containerName="pruner"
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.989374 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed99cfde-1902-4453-9add-80bcda64e51f" containerName="pruner"
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.989519 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="36001332-1cc9-44dc-8137-c117c2101ecd" containerName="pruner"
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.990266 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed99cfde-1902-4453-9add-80bcda64e51f" containerName="pruner"
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.990856 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.994361 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 22 13:46:45 crc kubenswrapper[4769]: I0122 13:46:45.997299 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.006116 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.054178 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cfh49" event={"ID":"9764ff0b-ae92-470b-af85-7c8bb41642ba","Type":"ContainerStarted","Data":"c9c7117195a6c56a6c7c00d6deb5e9326aa93080a7e4bb2226cdd4bcfe164637"}
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.058234 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.058289 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.070291 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cfh49" podStartSLOduration=166.070270777 podStartE2EDuration="2m46.070270777s" podCreationTimestamp="2026-01-22 13:44:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:46.066696809 +0000 UTC m=+185.477806758" watchObservedRunningTime="2026-01-22 13:46:46.070270777 +0000 UTC m=+185.481380706"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.159805 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.159918 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.160008 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.184537 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.313939 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:46 crc kubenswrapper[4769]: I0122 13:46:46.717180 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 22 13:46:47 crc kubenswrapper[4769]: I0122 13:46:47.061616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2144f5ad-561d-4f3f-bc49-dae55cb0773f","Type":"ContainerStarted","Data":"54a1e96488be8112c2c484ff5689f16167bf622b7f0a90f3d28a31e125f9d56a"}
Jan 22 13:46:48 crc kubenswrapper[4769]: I0122 13:46:48.071688 4769 generic.go:334] "Generic (PLEG): container finished" podID="2144f5ad-561d-4f3f-bc49-dae55cb0773f" containerID="989e3ac043272fed98dde5e78a5ad367a612ccbb3669b94d0f2d4e845f33992f" exitCode=0
Jan 22 13:46:48 crc kubenswrapper[4769]: I0122 13:46:48.071774 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2144f5ad-561d-4f3f-bc49-dae55cb0773f","Type":"ContainerDied","Data":"989e3ac043272fed98dde5e78a5ad367a612ccbb3669b94d0f2d4e845f33992f"}
Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.297214 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.310186 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") pod \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") "
Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.310271 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") pod \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\" (UID: \"2144f5ad-561d-4f3f-bc49-dae55cb0773f\") "
Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.310499 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2144f5ad-561d-4f3f-bc49-dae55cb0773f" (UID: "2144f5ad-561d-4f3f-bc49-dae55cb0773f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.321818 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2144f5ad-561d-4f3f-bc49-dae55cb0773f" (UID: "2144f5ad-561d-4f3f-bc49-dae55cb0773f"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.412174 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:49 crc kubenswrapper[4769]: I0122 13:46:49.412419 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2144f5ad-561d-4f3f-bc49-dae55cb0773f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:46:50 crc kubenswrapper[4769]: I0122 13:46:50.091158 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2144f5ad-561d-4f3f-bc49-dae55cb0773f","Type":"ContainerDied","Data":"54a1e96488be8112c2c484ff5689f16167bf622b7f0a90f3d28a31e125f9d56a"} Jan 22 13:46:50 crc kubenswrapper[4769]: I0122 13:46:50.091452 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54a1e96488be8112c2c484ff5689f16167bf622b7f0a90f3d28a31e125f9d56a" Jan 22 13:46:50 crc kubenswrapper[4769]: I0122 13:46:50.091670 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.588842 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 13:46:52 crc kubenswrapper[4769]: E0122 13:46:52.589353 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2144f5ad-561d-4f3f-bc49-dae55cb0773f" containerName="pruner" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.589386 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2144f5ad-561d-4f3f-bc49-dae55cb0773f" containerName="pruner" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.589627 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2144f5ad-561d-4f3f-bc49-dae55cb0773f" containerName="pruner" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.590417 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.596763 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.598403 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.605977 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.654366 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.654439 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.654484 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755204 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755466 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755580 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.755714 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") pod \"installer-9-crc\" (UID: 
\"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.773760 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") pod \"installer-9-crc\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:52 crc kubenswrapper[4769]: I0122 13:46:52.907172 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:46:53 crc kubenswrapper[4769]: I0122 13:46:53.079550 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 13:46:53 crc kubenswrapper[4769]: I0122 13:46:53.105730 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98422033-e252-4416-9d6c-9a782f84a615","Type":"ContainerStarted","Data":"cef04179ac91b5e7825693fb666c552ce048659165cf412a395f896a85539fbc"} Jan 22 13:46:55 crc kubenswrapper[4769]: I0122 13:46:55.117205 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerStarted","Data":"0b4e548d90afb445385c5445511aa7202d16841342834b94c99673ef067eba6b"} Jan 22 13:46:55 crc kubenswrapper[4769]: I0122 13:46:55.118758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98422033-e252-4416-9d6c-9a782f84a615","Type":"ContainerStarted","Data":"4c41b665319b212a65ed0ded3d69aee9bf5218eae07c0bc2b667f9ac261cd977"} Jan 22 13:46:55 crc kubenswrapper[4769]: I0122 13:46:55.136382 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.136359031 podStartE2EDuration="3.136359031s" podCreationTimestamp="2026-01-22 13:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:46:55.133313426 +0000 UTC m=+194.544423365" watchObservedRunningTime="2026-01-22 13:46:55.136359031 +0000 UTC m=+194.547468950" Jan 22 13:46:56 crc kubenswrapper[4769]: I0122 13:46:56.125435 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerID="0b4e548d90afb445385c5445511aa7202d16841342834b94c99673ef067eba6b" exitCode=0 Jan 22 13:46:56 crc kubenswrapper[4769]: I0122 13:46:56.125512 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerDied","Data":"0b4e548d90afb445385c5445511aa7202d16841342834b94c99673ef067eba6b"} Jan 22 13:46:57 crc kubenswrapper[4769]: I0122 13:46:57.135259 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerStarted","Data":"40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1"} Jan 22 13:46:57 crc kubenswrapper[4769]: I0122 13:46:57.138329 4769 generic.go:334] "Generic (PLEG): container finished" podID="9fbf5655-9685-4e15-a6af-41793097be11" containerID="2093f881d46af13d52d1fd20f110b59c6f048ae5d26012e9bdb3824ba5bc9f97" exitCode=0 Jan 22 13:46:57 crc kubenswrapper[4769]: I0122 
13:46:57.138380 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerDied","Data":"2093f881d46af13d52d1fd20f110b59c6f048ae5d26012e9bdb3824ba5bc9f97"} Jan 22 13:46:57 crc kubenswrapper[4769]: I0122 13:46:57.157301 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxbp4" podStartSLOduration=3.95828838 podStartE2EDuration="55.157281101s" podCreationTimestamp="2026-01-22 13:46:02 +0000 UTC" firstStartedPulling="2026-01-22 13:46:05.317447085 +0000 UTC m=+144.728557014" lastFinishedPulling="2026-01-22 13:46:56.516439806 +0000 UTC m=+195.927549735" observedRunningTime="2026-01-22 13:46:57.154075753 +0000 UTC m=+196.565185682" watchObservedRunningTime="2026-01-22 13:46:57.157281101 +0000 UTC m=+196.568391030" Jan 22 13:46:58 crc kubenswrapper[4769]: I0122 13:46:58.145660 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerStarted","Data":"2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df"} Jan 22 13:46:58 crc kubenswrapper[4769]: I0122 13:46:58.169421 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j2rz6" podStartSLOduration=3.2548638690000002 podStartE2EDuration="54.169401742s" podCreationTimestamp="2026-01-22 13:46:04 +0000 UTC" firstStartedPulling="2026-01-22 13:46:06.607914416 +0000 UTC m=+146.019024345" lastFinishedPulling="2026-01-22 13:46:57.522452289 +0000 UTC m=+196.933562218" observedRunningTime="2026-01-22 13:46:58.165481784 +0000 UTC m=+197.576591713" watchObservedRunningTime="2026-01-22 13:46:58.169401742 +0000 UTC m=+197.580511671" Jan 22 13:46:59 crc kubenswrapper[4769]: I0122 13:46:59.152356 4769 generic.go:334] "Generic (PLEG): container finished" podID="3b69c283-f109-4f09-9a01-8d21d3764892" containerID="e400121af3cd67eb8bf5be7255f64ed7758734a95d64ae486777a9d10ec8aeb7" exitCode=0 Jan 22 13:46:59 crc kubenswrapper[4769]: I0122 13:46:59.152443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerDied","Data":"e400121af3cd67eb8bf5be7255f64ed7758734a95d64ae486777a9d10ec8aeb7"} Jan 22 13:46:59 crc kubenswrapper[4769]: I0122 13:46:59.154432 4769 generic.go:334] "Generic (PLEG): container finished" podID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerID="fa803241b9a5ea5819645ac5f5279180cdfd0cd95f936430c68e37095716dc0b" exitCode=0 Jan 22 13:46:59 crc kubenswrapper[4769]: I0122 13:46:59.154470 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerDied","Data":"fa803241b9a5ea5819645ac5f5279180cdfd0cd95f936430c68e37095716dc0b"} Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.164039 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerStarted","Data":"1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8"} Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.166067 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" 
event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerStarted","Data":"7a208431e8933c9e4e61cbd123e3fa30817703e607bc55c6193139bbbbb024a0"} Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.168039 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerStarted","Data":"d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b"} Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.187586 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5rnmz" podStartSLOduration=3.963075941 podStartE2EDuration="58.187566295s" podCreationTimestamp="2026-01-22 13:46:02 +0000 UTC" firstStartedPulling="2026-01-22 13:46:05.34666331 +0000 UTC m=+144.757773239" lastFinishedPulling="2026-01-22 13:46:59.571153664 +0000 UTC m=+198.982263593" observedRunningTime="2026-01-22 13:47:00.184420789 +0000 UTC m=+199.595530718" watchObservedRunningTime="2026-01-22 13:47:00.187566295 +0000 UTC m=+199.598676224" Jan 22 13:47:00 crc kubenswrapper[4769]: I0122 13:47:00.204400 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2w22" podStartSLOduration=3.33248322 podStartE2EDuration="55.204380838s" podCreationTimestamp="2026-01-22 13:46:05 +0000 UTC" firstStartedPulling="2026-01-22 13:46:07.665129217 +0000 UTC m=+147.076239146" lastFinishedPulling="2026-01-22 13:46:59.537026835 +0000 UTC m=+198.948136764" observedRunningTime="2026-01-22 13:47:00.201024216 +0000 UTC m=+199.612134155" watchObservedRunningTime="2026-01-22 13:47:00.204380838 +0000 UTC m=+199.615490767" Jan 22 13:47:01 crc kubenswrapper[4769]: I0122 13:47:01.174707 4769 generic.go:334] "Generic (PLEG): container finished" podID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerID="7a208431e8933c9e4e61cbd123e3fa30817703e607bc55c6193139bbbbb024a0" exitCode=0 Jan 22 13:47:01 crc kubenswrapper[4769]: I0122 13:47:01.174748 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerDied","Data":"7a208431e8933c9e4e61cbd123e3fa30817703e607bc55c6193139bbbbb024a0"} Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.056810 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.057210 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.326432 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.362613 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.387738 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.387813 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:03 crc kubenswrapper[4769]: I0122 13:47:03.476588 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:04 crc kubenswrapper[4769]: I0122 13:47:04.238382 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.088739 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.088856 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.155221 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.243207 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:05 crc kubenswrapper[4769]: I0122 13:47:05.562749 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"] Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.138942 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.139318 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.182107 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.206338 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5rnmz" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="registry-server" containerID="cri-o://1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8" gracePeriod=2 Jan 22 13:47:06 crc kubenswrapper[4769]: I0122 13:47:06.241496 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:47:07 crc kubenswrapper[4769]: I0122 13:47:07.361204 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:47:07 crc kubenswrapper[4769]: I0122 13:47:07.361566 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j2rz6" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="registry-server" containerID="cri-o://2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df" gracePeriod=2 Jan 22 13:47:09 crc kubenswrapper[4769]: I0122 13:47:09.228701 4769 generic.go:334] "Generic (PLEG): container finished" podID="9fbf5655-9685-4e15-a6af-41793097be11" containerID="2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df" exitCode=0 Jan 22 13:47:09 crc kubenswrapper[4769]: I0122 13:47:09.228781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerDied","Data":"2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df"} Jan 22 13:47:09 crc kubenswrapper[4769]: I0122 13:47:09.232233 4769 generic.go:334] "Generic (PLEG): 
container finished" podID="3b69c283-f109-4f09-9a01-8d21d3764892" containerID="1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8" exitCode=0 Jan 22 13:47:09 crc kubenswrapper[4769]: I0122 13:47:09.232264 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerDied","Data":"1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8"} Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.481847 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.481925 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.481975 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.482499 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:47:10 crc kubenswrapper[4769]: I0122 13:47:10.482643 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d" gracePeriod=600 Jan 22 13:47:11 crc kubenswrapper[4769]: I0122 13:47:11.246407 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d" exitCode=0 Jan 22 13:47:11 crc kubenswrapper[4769]: I0122 13:47:11.246483 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d"} Jan 22 13:47:11 crc kubenswrapper[4769]: I0122 13:47:11.964098 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:11 crc kubenswrapper[4769]: I0122 13:47:11.976268 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.088556 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") pod \"3b69c283-f109-4f09-9a01-8d21d3764892\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.088738 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") pod \"9fbf5655-9685-4e15-a6af-41793097be11\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.088956 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") pod \"3b69c283-f109-4f09-9a01-8d21d3764892\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.089103 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") pod \"9fbf5655-9685-4e15-a6af-41793097be11\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.089176 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") pod \"9fbf5655-9685-4e15-a6af-41793097be11\" (UID: \"9fbf5655-9685-4e15-a6af-41793097be11\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.089206 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") pod \"3b69c283-f109-4f09-9a01-8d21d3764892\" (UID: \"3b69c283-f109-4f09-9a01-8d21d3764892\") " Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.090561 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities" (OuterVolumeSpecName: "utilities") pod "9fbf5655-9685-4e15-a6af-41793097be11" (UID: "9fbf5655-9685-4e15-a6af-41793097be11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.090988 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities" (OuterVolumeSpecName: "utilities") pod "3b69c283-f109-4f09-9a01-8d21d3764892" (UID: "3b69c283-f109-4f09-9a01-8d21d3764892"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.095340 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6" (OuterVolumeSpecName: "kube-api-access-mn7q6") pod "9fbf5655-9685-4e15-a6af-41793097be11" (UID: "9fbf5655-9685-4e15-a6af-41793097be11"). InnerVolumeSpecName "kube-api-access-mn7q6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.097307 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v" (OuterVolumeSpecName: "kube-api-access-gj54v") pod "3b69c283-f109-4f09-9a01-8d21d3764892" (UID: "3b69c283-f109-4f09-9a01-8d21d3764892"). InnerVolumeSpecName "kube-api-access-gj54v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.115128 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9fbf5655-9685-4e15-a6af-41793097be11" (UID: "9fbf5655-9685-4e15-a6af-41793097be11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.152562 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b69c283-f109-4f09-9a01-8d21d3764892" (UID: "3b69c283-f109-4f09-9a01-8d21d3764892"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192414 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192460 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192472 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn7q6\" (UniqueName: \"kubernetes.io/projected/9fbf5655-9685-4e15-a6af-41793097be11-kube-api-access-mn7q6\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192487 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj54v\" (UniqueName: \"kubernetes.io/projected/3b69c283-f109-4f09-9a01-8d21d3764892-kube-api-access-gj54v\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192495 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b69c283-f109-4f09-9a01-8d21d3764892-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.192505 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9fbf5655-9685-4e15-a6af-41793097be11-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.255759 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rnmz" event={"ID":"3b69c283-f109-4f09-9a01-8d21d3764892","Type":"ContainerDied","Data":"95901b43f1b0b192d242724acdf435d55c1a459bc7ffc435091c0491b7b2a77a"} Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.255848 4769 scope.go:117] "RemoveContainer" containerID="1c8fc3cb530e77764cab2c943062502a2e038d4d2dc51fdf4d33f28c4197f9f8" Jan 22 13:47:12 crc 
kubenswrapper[4769]: I0122 13:47:12.256044 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rnmz" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.259913 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2rz6" event={"ID":"9fbf5655-9685-4e15-a6af-41793097be11","Type":"ContainerDied","Data":"a09f3ed86d9fde6e4e25dc5687d5358cea66879bd11fddb52ce0cdd1a1c76559"} Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.259949 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2rz6" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.285819 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"] Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.288548 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5rnmz"] Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.295039 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.298588 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2rz6"] Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.895135 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" path="/var/lib/kubelet/pods/3b69c283-f109-4f09-9a01-8d21d3764892/volumes" Jan 22 13:47:12 crc kubenswrapper[4769]: I0122 13:47:12.897541 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbf5655-9685-4e15-a6af-41793097be11" path="/var/lib/kubelet/pods/9fbf5655-9685-4e15-a6af-41793097be11/volumes" Jan 22 13:47:14 crc kubenswrapper[4769]: I0122 13:47:14.728106 4769 scope.go:117] "RemoveContainer" containerID="e400121af3cd67eb8bf5be7255f64ed7758734a95d64ae486777a9d10ec8aeb7" Jan 22 13:47:14 crc kubenswrapper[4769]: I0122 13:47:14.812099 4769 scope.go:117] "RemoveContainer" containerID="046d05b3f47f3e1cd122e05caaffbaade2a750f09bb666394477d6007a1313e9" Jan 22 13:47:15 crc kubenswrapper[4769]: I0122 13:47:15.830146 4769 scope.go:117] "RemoveContainer" containerID="2b7cfe6672ef75a7bbf8ae1ba009321f1510b8bb071422e60f3a5319d2a3d6df" Jan 22 13:47:15 crc kubenswrapper[4769]: I0122 13:47:15.907380 4769 scope.go:117] "RemoveContainer" containerID="2093f881d46af13d52d1fd20f110b59c6f048ae5d26012e9bdb3824ba5bc9f97" Jan 22 13:47:15 crc kubenswrapper[4769]: I0122 13:47:15.958613 4769 scope.go:117] "RemoveContainer" containerID="3502879dadc38b5cd99def96e405968a047479756eeea61ee2071af582a36fdd" Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.289769 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41"} Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.293275 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerStarted","Data":"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"} Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.296591 4769 generic.go:334] "Generic (PLEG): container 
finished" podID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerID="19f11c0236c241f234013da4669e8dd67b3f4430afe2db85d03abaaa7cb48e7c" exitCode=0 Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.296667 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerDied","Data":"19f11c0236c241f234013da4669e8dd67b3f4430afe2db85d03abaaa7cb48e7c"} Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.300152 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerStarted","Data":"0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3"} Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.302230 4769 generic.go:334] "Generic (PLEG): container finished" podID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerID="c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6" exitCode=0 Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.302289 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerDied","Data":"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6"} Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.330973 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ks9m" podStartSLOduration=3.8137846250000003 podStartE2EDuration="1m14.330944426s" podCreationTimestamp="2026-01-22 13:46:02 +0000 UTC" firstStartedPulling="2026-01-22 13:46:05.310103194 +0000 UTC m=+144.721213123" lastFinishedPulling="2026-01-22 13:47:15.827262995 +0000 UTC m=+215.238372924" observedRunningTime="2026-01-22 13:47:16.326904924 +0000 UTC m=+215.738014873" watchObservedRunningTime="2026-01-22 13:47:16.330944426 +0000 UTC m=+215.742054365" Jan 22 13:47:16 crc kubenswrapper[4769]: I0122 13:47:16.533232 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-jtzpg"] Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.311721 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerStarted","Data":"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259"} Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.314286 4769 generic.go:334] "Generic (PLEG): container finished" podID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerID="e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9" exitCode=0 Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.314371 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerDied","Data":"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"} Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.316899 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerStarted","Data":"2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893"} Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.335202 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-7wh4n" podStartSLOduration=3.905389084 podStartE2EDuration="1m15.335183359s" podCreationTimestamp="2026-01-22 13:46:02 +0000 UTC" firstStartedPulling="2026-01-22 13:46:05.340693886 +0000 UTC m=+144.751803815" lastFinishedPulling="2026-01-22 13:47:16.770488161 +0000 UTC m=+216.181598090" observedRunningTime="2026-01-22 13:47:17.331594611 +0000 UTC m=+216.742704540" watchObservedRunningTime="2026-01-22 13:47:17.335183359 +0000 UTC m=+216.746293288" Jan 22 13:47:17 crc kubenswrapper[4769]: I0122 13:47:17.354345 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v8jk5" podStartSLOduration=3.156676956 podStartE2EDuration="1m13.354324446s" podCreationTimestamp="2026-01-22 13:46:04 +0000 UTC" firstStartedPulling="2026-01-22 13:46:06.511090272 +0000 UTC m=+145.922200201" lastFinishedPulling="2026-01-22 13:47:16.708737762 +0000 UTC m=+216.119847691" observedRunningTime="2026-01-22 13:47:17.350389298 +0000 UTC m=+216.761499247" watchObservedRunningTime="2026-01-22 13:47:17.354324446 +0000 UTC m=+216.765434375" Jan 22 13:47:18 crc kubenswrapper[4769]: I0122 13:47:18.325218 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerStarted","Data":"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"} Jan 22 13:47:18 crc kubenswrapper[4769]: I0122 13:47:18.346308 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9x475" podStartSLOduration=2.321950052 podStartE2EDuration="1m12.346291752s" podCreationTimestamp="2026-01-22 13:46:06 +0000 UTC" firstStartedPulling="2026-01-22 13:46:07.664390598 +0000 UTC m=+147.075500527" lastFinishedPulling="2026-01-22 13:47:17.688732298 +0000 UTC m=+217.099842227" observedRunningTime="2026-01-22 13:47:18.342658792 +0000 UTC m=+217.753768721" watchObservedRunningTime="2026-01-22 13:47:18.346291752 +0000 UTC m=+217.757401681" Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.034744 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.035041 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.088747 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.373693 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.373755 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.397377 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:47:23 crc kubenswrapper[4769]: I0122 13:47:23.412769 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:47:24 crc kubenswrapper[4769]: I0122 13:47:24.408890 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:47:24 crc kubenswrapper[4769]: I0122 13:47:24.805909 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:47:24 crc kubenswrapper[4769]: I0122 13:47:24.805995 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:47:24 crc kubenswrapper[4769]: I0122 13:47:24.847637 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:47:25 crc kubenswrapper[4769]: I0122 13:47:25.426839 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:47:26 crc kubenswrapper[4769]: I0122 13:47:26.529823 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:47:26 crc kubenswrapper[4769]: I0122 13:47:26.530113 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:47:26 crc kubenswrapper[4769]: I0122 13:47:26.565274 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:47:27 crc kubenswrapper[4769]: I0122 13:47:27.425236 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:47:28 crc kubenswrapper[4769]: I0122 13:47:28.362582 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ks9m"] Jan 22 13:47:28 crc kubenswrapper[4769]: I0122 13:47:28.363188 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2ks9m" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="registry-server" containerID="cri-o://0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3" gracePeriod=2 Jan 22 13:47:30 crc kubenswrapper[4769]: I0122 13:47:30.569234 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x475"] Jan 22 13:47:30 crc kubenswrapper[4769]: I0122 13:47:30.569693 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9x475" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="registry-server" containerID="cri-o://a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22" gracePeriod=2 Jan 22 13:47:31 crc kubenswrapper[4769]: I0122 13:47:31.402737 4769 generic.go:334] "Generic (PLEG): container finished" podID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerID="0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3" exitCode=0 Jan 22 13:47:31 crc kubenswrapper[4769]: I0122 13:47:31.402849 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerDied","Data":"0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3"} Jan 22 13:47:31 crc kubenswrapper[4769]: I0122 13:47:31.910475 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:47:31 crc kubenswrapper[4769]: I0122 13:47:31.981289 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052205 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") pod \"143027dc-ac6a-442f-bf57-3dcd7efd0427\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052288 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") pod \"143027dc-ac6a-442f-bf57-3dcd7efd0427\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") pod \"bc744951-0370-42be-a1c0-e639d8d8cd31\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052380 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") pod \"bc744951-0370-42be-a1c0-e639d8d8cd31\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052403 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") pod \"143027dc-ac6a-442f-bf57-3dcd7efd0427\" (UID: \"143027dc-ac6a-442f-bf57-3dcd7efd0427\") " Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.052485 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") pod \"bc744951-0370-42be-a1c0-e639d8d8cd31\" (UID: \"bc744951-0370-42be-a1c0-e639d8d8cd31\") " Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.054068 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities" (OuterVolumeSpecName: "utilities") pod "143027dc-ac6a-442f-bf57-3dcd7efd0427" (UID: "143027dc-ac6a-442f-bf57-3dcd7efd0427"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.054609 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities" (OuterVolumeSpecName: "utilities") pod "bc744951-0370-42be-a1c0-e639d8d8cd31" (UID: "bc744951-0370-42be-a1c0-e639d8d8cd31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.058500 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf" (OuterVolumeSpecName: "kube-api-access-hjqjf") pod "143027dc-ac6a-442f-bf57-3dcd7efd0427" (UID: "143027dc-ac6a-442f-bf57-3dcd7efd0427"). InnerVolumeSpecName "kube-api-access-hjqjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.058538 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp" (OuterVolumeSpecName: "kube-api-access-xmkrp") pod "bc744951-0370-42be-a1c0-e639d8d8cd31" (UID: "bc744951-0370-42be-a1c0-e639d8d8cd31"). InnerVolumeSpecName "kube-api-access-xmkrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.114119 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc744951-0370-42be-a1c0-e639d8d8cd31" (UID: "bc744951-0370-42be-a1c0-e639d8d8cd31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155025 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155086 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155104 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc744951-0370-42be-a1c0-e639d8d8cd31-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155118 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjqjf\" (UniqueName: \"kubernetes.io/projected/143027dc-ac6a-442f-bf57-3dcd7efd0427-kube-api-access-hjqjf\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.155132 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmkrp\" (UniqueName: \"kubernetes.io/projected/bc744951-0370-42be-a1c0-e639d8d8cd31-kube-api-access-xmkrp\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.163945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "143027dc-ac6a-442f-bf57-3dcd7efd0427" (UID: "143027dc-ac6a-442f-bf57-3dcd7efd0427"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317555 4769 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317844 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="extract-content" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317862 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="extract-content" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317883 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="extract-utilities" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317891 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="extract-utilities" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317905 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317914 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317928 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="extract-content" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317936 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="extract-content" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317947 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="extract-utilities" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317956 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="extract-utilities" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.317972 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.317982 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318000 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="extract-content" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318010 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="extract-content" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318019 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318028 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318071 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318080 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318093 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="extract-utilities" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318101 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="extract-utilities" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318114 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="extract-content" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318122 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="extract-content" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.318135 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="extract-utilities" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318143 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="extract-utilities" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318257 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b69c283-f109-4f09-9a01-8d21d3764892" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318276 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbf5655-9685-4e15-a6af-41793097be11" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318289 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318301 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" containerName="registry-server" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.318705 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322520 4769 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322571 4769 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322738 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322758 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322773 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322782 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322818 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322831 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322845 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322855 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322871 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322881 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322898 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322908 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.322921 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.322931 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323068 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 
13:47:32.323084 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323099 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323110 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323122 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.323133 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.324942 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d" gracePeriod=15 Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.325104 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925" gracePeriod=15 Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.325169 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c" gracePeriod=15 Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.325208 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda" gracePeriod=15 Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.325252 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45" gracePeriod=15 Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340078 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340125 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340141 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340166 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340183 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340203 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340219 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340233 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.340276 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143027dc-ac6a-442f-bf57-3dcd7efd0427-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.353810 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440652 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 
13:47:32.440716 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440769 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440803 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440829 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440842 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440856 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440957 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440976 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.440997 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.441016 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.441046 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.441070 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.441095 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.655337 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.852556 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ks9m" event={"ID":"bc744951-0370-42be-a1c0-e639d8d8cd31","Type":"ContainerDied","Data":"9d4a213a14f5a21b9ecd231875d6aa22cbbfb7d75a58db27a2f98d97feb1dafb"} Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.852595 4769 scope.go:117] "RemoveContainer" containerID="0818f3de0722e6de63433adafbc2984ceb47d784f262f5f25aac6b7ea434f1d3" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.852700 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2ks9m" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.859017 4769 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.860189 4769 generic.go:334] "Generic (PLEG): container finished" podID="143027dc-ac6a-442f-bf57-3dcd7efd0427" containerID="a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22" exitCode=0 Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.860231 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerDied","Data":"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"} Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.860258 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x475" event={"ID":"143027dc-ac6a-442f-bf57-3dcd7efd0427","Type":"ContainerDied","Data":"eb0f0ad4dc9a1519cccefda94331b40c9be757f72e950d3d8010309da7e5d54b"} Jan 22 13:47:32 crc kubenswrapper[4769]: I0122 13:47:32.860326 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x475" Jan 22 13:47:32 crc kubenswrapper[4769]: E0122 13:47:32.917437 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d11aa91e7e10e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,LastTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.010598 4769 scope.go:117] "RemoveContainer" containerID="7a208431e8933c9e4e61cbd123e3fa30817703e607bc55c6193139bbbbb024a0" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.038562 4769 scope.go:117] "RemoveContainer" containerID="acd4331bf5a97dd63bc534d1279a9dc1a57106f0b79215b9c6214a3510910a34" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.052014 4769 scope.go:117] "RemoveContainer" containerID="a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.078048 4769 scope.go:117] "RemoveContainer" containerID="e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.102833 4769 scope.go:117] "RemoveContainer" 
containerID="5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.118196 4769 scope.go:117] "RemoveContainer" containerID="a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.118578 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22\": container with ID starting with a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22 not found: ID does not exist" containerID="a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.118611 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22"} err="failed to get container status \"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22\": rpc error: code = NotFound desc = could not find container \"a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22\": container with ID starting with a327a36f7022c1a24c8a5b106ee59eef5d512a899727f29882a5d05c93111b22 not found: ID does not exist" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.118634 4769 scope.go:117] "RemoveContainer" containerID="e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.118989 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9\": container with ID starting with e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9 not found: ID does not exist" containerID="e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.119007 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9"} err="failed to get container status \"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9\": rpc error: code = NotFound desc = could not find container \"e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9\": container with ID starting with e4d380a769da25ffa3d6e4f72472de743cdc4dd53dbe264e09a44596b45a58b9 not found: ID does not exist" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.119020 4769 scope.go:117] "RemoveContainer" containerID="5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.119274 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb\": container with ID starting with 5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb not found: ID does not exist" containerID="5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.119294 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb"} err="failed to get container status \"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb\": rpc error: code = 
NotFound desc = could not find container \"5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb\": container with ID starting with 5a649e12e124f4a64a4f1afd91e39d4e717943b4a392b3b9c65213bb1e563adb not found: ID does not exist" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.570213 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.570674 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.571252 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.571842 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.572337 4769 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.572374 4769 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.572673 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="200ms" Jan 22 13:47:33 crc kubenswrapper[4769]: E0122 13:47:33.773487 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="400ms" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.867642 4769 generic.go:334] "Generic (PLEG): container finished" podID="98422033-e252-4416-9d6c-9a782f84a615" containerID="4c41b665319b212a65ed0ded3d69aee9bf5218eae07c0bc2b667f9ac261cd977" exitCode=0 Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.867735 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98422033-e252-4416-9d6c-9a782f84a615","Type":"ContainerDied","Data":"4c41b665319b212a65ed0ded3d69aee9bf5218eae07c0bc2b667f9ac261cd977"} Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.869462 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce"} Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.869503 4769 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5558e879799fd2ba6a9fcdb28caf045208b66d263eead1e6875aa65fba01d965"} Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.871193 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.872200 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.872973 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925" exitCode=0 Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.873015 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c" exitCode=0 Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.873025 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda" exitCode=0 Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.873036 4769 scope.go:117] "RemoveContainer" containerID="1c5dcd4cada4e9ef455bca6d771b434eb6bcfd04efca4a0cc9dc931fc972496d" Jan 22 13:47:33 crc kubenswrapper[4769]: I0122 13:47:33.873054 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45" exitCode=2 Jan 22 13:47:34 crc kubenswrapper[4769]: E0122 13:47:34.174777 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="800ms" Jan 22 13:47:34 crc kubenswrapper[4769]: I0122 13:47:34.902987 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 13:47:34 crc kubenswrapper[4769]: E0122 13:47:34.976097 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="1.6s" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.173549 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.179326 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.180051 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") pod \"98422033-e252-4416-9d6c-9a782f84a615\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290893 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290919 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") pod \"98422033-e252-4416-9d6c-9a782f84a615\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290945 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") pod \"98422033-e252-4416-9d6c-9a782f84a615\" (UID: \"98422033-e252-4416-9d6c-9a782f84a615\") " Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock" (OuterVolumeSpecName: "var-lock") pod "98422033-e252-4416-9d6c-9a782f84a615" (UID: "98422033-e252-4416-9d6c-9a782f84a615"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.290967 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291011 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291043 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "98422033-e252-4416-9d6c-9a782f84a615" (UID: "98422033-e252-4416-9d6c-9a782f84a615"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291061 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291109 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291194 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291503 4769 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291517 4769 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291526 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/98422033-e252-4416-9d6c-9a782f84a615-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291534 4769 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.291543 4769 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.299642 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "98422033-e252-4416-9d6c-9a782f84a615" (UID: "98422033-e252-4416-9d6c-9a782f84a615"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:35 crc kubenswrapper[4769]: E0122 13:47:35.339837 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d11aa91e7e10e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,LastTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.392736 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98422033-e252-4416-9d6c-9a782f84a615-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.913572 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.913549 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"98422033-e252-4416-9d6c-9a782f84a615","Type":"ContainerDied","Data":"cef04179ac91b5e7825693fb666c552ce048659165cf412a395f896a85539fbc"} Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.913628 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef04179ac91b5e7825693fb666c552ce048659165cf412a395f896a85539fbc" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.916495 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.917888 4769 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d" exitCode=0 Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.917952 4769 scope.go:117] "RemoveContainer" containerID="d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.917966 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.947554 4769 scope.go:117] "RemoveContainer" containerID="55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.963776 4769 scope.go:117] "RemoveContainer" containerID="7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda" Jan 22 13:47:35 crc kubenswrapper[4769]: I0122 13:47:35.980987 4769 scope.go:117] "RemoveContainer" containerID="932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.001581 4769 scope.go:117] "RemoveContainer" containerID="3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.019631 4769 scope.go:117] "RemoveContainer" containerID="a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.041100 4769 scope.go:117] "RemoveContainer" containerID="d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.041895 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\": container with ID starting with d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925 not found: ID does not exist" containerID="d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.041941 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925"} err="failed to get container status \"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\": rpc error: code = NotFound desc = could not find container \"d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925\": container with ID starting with d2a80aaeadbc7b8f41caea57ff1fd3d4a0c15f99824625ed1332d0a4d39a8925 not found: ID does not exist" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.041978 4769 scope.go:117] "RemoveContainer" containerID="55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.042624 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\": container with ID starting with 55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c not found: ID does not exist" containerID="55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.042664 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c"} err="failed to get container status \"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\": rpc error: code = NotFound desc = could not find container \"55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c\": container with ID starting with 55086ac3837ef07e37ee3e14b46eaa485d99dd3b638e33aec9137780b63d951c not found: ID does not exist" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.042848 4769 scope.go:117] "RemoveContainer" 
containerID="7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.043828 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\": container with ID starting with 7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda not found: ID does not exist" containerID="7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.043866 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda"} err="failed to get container status \"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\": rpc error: code = NotFound desc = could not find container \"7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda\": container with ID starting with 7a6e519181a3a16284c9883d881d15686c41287993d37076bfea43dcb5d6eeda not found: ID does not exist" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.043892 4769 scope.go:117] "RemoveContainer" containerID="932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.044228 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\": container with ID starting with 932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45 not found: ID does not exist" containerID="932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.044266 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45"} err="failed to get container status \"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\": rpc error: code = NotFound desc = could not find container \"932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45\": container with ID starting with 932b2d9f5bebade31092375a9a32842bde6cab040e2e2171aea8fb0d72a4ef45 not found: ID does not exist" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.044294 4769 scope.go:117] "RemoveContainer" containerID="3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.044627 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\": container with ID starting with 3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d not found: ID does not exist" containerID="3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.044649 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d"} err="failed to get container status \"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\": rpc error: code = NotFound desc = could not find container \"3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d\": container with ID starting with 
3e1514c3bdf3989d80caffb001a076af60dcfe4c0b4d547188df7fdbb2b1aa7d not found: ID does not exist" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.044663 4769 scope.go:117] "RemoveContainer" containerID="a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.045014 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\": container with ID starting with a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5 not found: ID does not exist" containerID="a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.045049 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5"} err="failed to get container status \"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\": rpc error: code = NotFound desc = could not find container \"a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5\": container with ID starting with a07838ccc733c35b563f20b3e09d2fd76bf4ea33a9d6e9cc68e8ac1484f3d7a5 not found: ID does not exist" Jan 22 13:47:36 crc kubenswrapper[4769]: E0122 13:47:36.576748 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="3.2s" Jan 22 13:47:36 crc kubenswrapper[4769]: I0122 13:47:36.891361 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.861513 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.862043 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.862438 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.862758 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc 
kubenswrapper[4769]: I0122 13:47:37.863262 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:37 crc kubenswrapper[4769]: I0122 13:47:37.863713 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:39 crc kubenswrapper[4769]: E0122 13:47:39.778311 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="6.4s" Jan 22 13:47:40 crc kubenswrapper[4769]: I0122 13:47:40.889785 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:40 crc kubenswrapper[4769]: I0122 13:47:40.890351 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:40 crc kubenswrapper[4769]: I0122 13:47:40.895727 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:40 crc kubenswrapper[4769]: I0122 13:47:40.896453 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.557735 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerName="oauth-openshift" containerID="cri-o://6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" gracePeriod=15 Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.905926 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.907158 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.907715 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.908317 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.908780 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.909336 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954631 4769 generic.go:334] "Generic (PLEG): container finished" podID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerID="6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" exitCode=0 Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954683 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954682 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" event={"ID":"e14c6636-281b-40e1-9ee8-1a08812104fd","Type":"ContainerDied","Data":"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea"} Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954751 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" event={"ID":"e14c6636-281b-40e1-9ee8-1a08812104fd","Type":"ContainerDied","Data":"ecd96351628bb1d50b55482cf0c3518a0cdf7cafe69577c7b0d90695bd293ec5"} Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.954776 4769 scope.go:117] "RemoveContainer" containerID="6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.955310 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.955864 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.956213 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.956568 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.956930 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.979933 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.980002 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.980048 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.980232 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981056 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981153 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981185 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981236 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981306 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981349 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981378 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981419 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981471 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981514 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981549 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.981572 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") pod \"e14c6636-281b-40e1-9ee8-1a08812104fd\" (UID: \"e14c6636-281b-40e1-9ee8-1a08812104fd\") " Jan 22 13:47:41 
crc kubenswrapper[4769]: I0122 13:47:41.982070 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.982261 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.982289 4769 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.982306 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.982314 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.983048 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.986945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.988011 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.988330 4769 scope.go:117] "RemoveContainer" containerID="6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" Jan 22 13:47:41 crc kubenswrapper[4769]: E0122 13:47:41.988870 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea\": container with ID starting with 6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea not found: ID does not exist" containerID="6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.988913 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea"} err="failed to get container status \"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea\": rpc error: code = NotFound desc = could not find container \"6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea\": container with ID starting with 6c1793a53b8ea260d1542d071a7c88803a7a6d2b79a3a6f7fb53e4533578a8ea not found: ID does not exist" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.990013 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.990292 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.990627 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.991160 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.991223 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk" (OuterVolumeSpecName: "kube-api-access-zrbwk") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "kube-api-access-zrbwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.991314 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:41 crc kubenswrapper[4769]: I0122 13:47:41.991841 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e14c6636-281b-40e1-9ee8-1a08812104fd" (UID: "e14c6636-281b-40e1-9ee8-1a08812104fd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083421 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083462 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrbwk\" (UniqueName: \"kubernetes.io/projected/e14c6636-281b-40e1-9ee8-1a08812104fd-kube-api-access-zrbwk\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083472 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083481 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083494 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083503 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083512 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083521 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083530 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083542 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.083551 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e14c6636-281b-40e1-9ee8-1a08812104fd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.277944 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.278641 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.279121 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.279440 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:42 crc kubenswrapper[4769]: I0122 13:47:42.279898 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:45 crc kubenswrapper[4769]: E0122 13:47:45.340922 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.50:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d11aa91e7e10e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,LastTimestamp:2026-01-22 13:47:32.916478222 +0000 UTC m=+232.327588151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 13:47:46 crc kubenswrapper[4769]: E0122 13:47:46.180291 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.50:6443: connect: connection refused" interval="7s" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.989946 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.990012 4769 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47" exitCode=1 Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.990073 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47"} Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.990677 4769 scope.go:117] "RemoveContainer" containerID="83046a8dcab554309ccb822f7ea451bd90e9bbe0037f2b51061cb2943122ac47" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.991161 4769 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.991701 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.992247 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.992635 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.993167 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:46 crc kubenswrapper[4769]: I0122 13:47:46.993668 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.883258 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.885201 4769 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.886528 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.887166 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.887694 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.888266 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 
38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.888917 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.901740 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.901779 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:47 crc kubenswrapper[4769]: E0122 13:47:47.902316 4769 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:47 crc kubenswrapper[4769]: I0122 13:47:47.903045 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:47 crc kubenswrapper[4769]: W0122 13:47:47.932934 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d837367816c630dfa44940a9c515917ea8b41c6692dd58a65d5b65c00ec83cb9 WatchSource:0}: Error finding container d837367816c630dfa44940a9c515917ea8b41c6692dd58a65d5b65c00ec83cb9: Status 404 returned error can't find the container with id d837367816c630dfa44940a9c515917ea8b41c6692dd58a65d5b65c00ec83cb9 Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.002983 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.003149 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2b801659beb601eac2687939f669ac486437e11bf2809863d0f3c82193d625ef"} Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.004467 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.005129 4769 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.005390 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d837367816c630dfa44940a9c515917ea8b41c6692dd58a65d5b65c00ec83cb9"} Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.005630 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.006203 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.006893 4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:48 crc kubenswrapper[4769]: I0122 13:47:48.007388 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.014380 4769 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="dd074ee6f05dfb7f27b8b3cbfe33bc383b045772c3f61ed94ace304313aea8e0" exitCode=0 Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.014608 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.014659 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.014709 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"dd074ee6f05dfb7f27b8b3cbfe33bc383b045772c3f61ed94ace304313aea8e0"} Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.015141 4769 status_manager.go:851] "Failed to get status for pod" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" pod="openshift-marketplace/certified-operators-2ks9m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2ks9m\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: E0122 13:47:49.015309 4769 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.015400 
4769 status_manager.go:851] "Failed to get status for pod" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" pod="openshift-authentication/oauth-openshift-558db77b4-jtzpg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-jtzpg\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.015654 4769 status_manager.go:851] "Failed to get status for pod" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" pod="openshift-marketplace/redhat-operators-9x475" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-9x475\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.015954 4769 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.016715 4769 status_manager.go:851] "Failed to get status for pod" podUID="98422033-e252-4416-9d6c-9a782f84a615" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.017011 4769 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.50:6443: connect: connection refused" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.390012 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.390145 4769 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 13:47:49 crc kubenswrapper[4769]: I0122 13:47:49.390199 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.029960 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"06c0e2395c7cf93850d7fa2e4d5ed0de84ec761b207fe82e34e9161f79e1c68c"} Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.030501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e834f92db93d1442490f9e2de8324e3492610d235f634f1be65875b5c941b47b"} Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.030517 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d35f3f93181017ef12da2b8dd39b76770682569c663a72985d77be2eaa6e4b28"} Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.030529 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2d46abaf6523fc5bbd161058b225fd16feb206c4ef7c1baae949da9a1d15290d"} Jan 22 13:47:50 crc kubenswrapper[4769]: I0122 13:47:50.030541 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5ef5b37f4fd5fea97b1f5419e0a1ccd7654e51ee6f955d82ccfce421fceb5aea"} Jan 22 13:47:51 crc kubenswrapper[4769]: I0122 13:47:51.035958 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:51 crc kubenswrapper[4769]: I0122 13:47:51.035994 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:51 crc kubenswrapper[4769]: I0122 13:47:51.036032 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:52 crc kubenswrapper[4769]: I0122 13:47:52.904039 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:52 crc kubenswrapper[4769]: I0122 13:47:52.904393 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:52 crc kubenswrapper[4769]: I0122 13:47:52.911658 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:54 crc kubenswrapper[4769]: I0122 13:47:54.089783 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.046017 4769 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.064938 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.065861 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.070355 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 13:47:56 crc kubenswrapper[4769]: I0122 13:47:56.073162 4769 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0e85410e-37b5-456b-9cd6-bd0b56e92a98" Jan 22 13:47:57 crc 
kubenswrapper[4769]: I0122 13:47:57.070921 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:57 crc kubenswrapper[4769]: I0122 13:47:57.070964 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4" Jan 22 13:47:59 crc kubenswrapper[4769]: I0122 13:47:59.396072 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:47:59 crc kubenswrapper[4769]: I0122 13:47:59.404914 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 13:48:00 crc kubenswrapper[4769]: I0122 13:48:00.905144 4769 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0e85410e-37b5-456b-9cd6-bd0b56e92a98" Jan 22 13:48:05 crc kubenswrapper[4769]: I0122 13:48:05.514558 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.044106 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.354913 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.583962 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.777709 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 13:48:06 crc kubenswrapper[4769]: I0122 13:48:06.881804 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.054759 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.286557 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.321695 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.371564 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.476916 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.576708 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.594384 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 
22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.623130 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.744562 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.752202 4769 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 13:48:07 crc kubenswrapper[4769]: I0122 13:48:07.769278 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.188524 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.370820 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.428845 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.581424 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.618500 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.644665 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.695982 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 13:48:08 crc kubenswrapper[4769]: I0122 13:48:08.707764 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.210763 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.376483 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.410247 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.422506 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.440937 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.449408 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.520729 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 13:48:09 crc kubenswrapper[4769]: 
I0122 13:48:09.553048 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.587290 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.732303 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.739288 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.745452 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.783336 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.796580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 13:48:09 crc kubenswrapper[4769]: I0122 13:48:09.998457 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.052173 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.125999 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.239968 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.257116 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.277317 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.396345 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.408181 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.494681 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.513276 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.562843 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.620008 4769 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.622240 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.671596 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.826866 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.921683 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.923687 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.930847 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.931655 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 13:48:10 crc kubenswrapper[4769]: I0122 13:48:10.999725 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.011454 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.052760 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.070665 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.092784 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.132247 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.145036 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.178285 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.253427 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.276497 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.329836 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.343020 4769 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.469113 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.476426 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.476750 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.614236 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.654596 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.676554 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.685913 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.694581 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.728965 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.868277 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.929843 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.936097 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.948364 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 13:48:11 crc kubenswrapper[4769]: I0122 13:48:11.949895 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.007756 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.149389 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.203991 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.218628 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.227001 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 13:48:12 crc 
kubenswrapper[4769]: I0122 13:48:12.289301 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.306273 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.491734 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.499486 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.519556 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.573662 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.657983 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.698831 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.738511 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.832237 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 13:48:12 crc kubenswrapper[4769]: I0122 13:48:12.940919 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.176553 4769 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.215568 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.377904 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.609118 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.650064 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.711559 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.722508 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.756474 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 13:48:13 crc kubenswrapper[4769]: 
I0122 13:48:13.836132 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 13:48:13 crc kubenswrapper[4769]: I0122 13:48:13.940891 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.010851 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.019905 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.085076 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.146634 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.192587 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.225959 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.227529 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.466171 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.677425 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.682089 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.733974 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.783828 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.870927 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.912364 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.929026 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 13:48:14 crc kubenswrapper[4769]: I0122 13:48:14.955197 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.033970 4769 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.077450 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.114783 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.146531 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.166687 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.325941 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.365818 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.419921 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.511450 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.555989 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.556868 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.600454 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.615568 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.634828 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.741409 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.757156 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.783372 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 13:48:15 crc kubenswrapper[4769]: I0122 13:48:15.860422 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.037124 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" 
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.067752 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.119777 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.143289 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.152948 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.157386 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.185154 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.269390 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.287313 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.293732 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.448677 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.463142 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.483219 4769 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.542105 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.552644 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.621620 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.665600 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.675160 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.746155 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.777701 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.792287 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 13:48:16 crc kubenswrapper[4769]: I0122 13:48:16.893958 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.062527 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.183219 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.226675 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.237396 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.386089 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.392446 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.434746 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.475217 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.484772 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.507170 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.566401 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.577252 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.756837 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.777367 4769 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.777774 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.777761215 podStartE2EDuration="45.777761215s" podCreationTimestamp="2026-01-22 13:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:47:56.001269999 +0000 UTC m=+255.412379948" watchObservedRunningTime="2026-01-22 13:48:17.777761215 +0000 UTC m=+277.188871134"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781262 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2ks9m","openshift-authentication/oauth-openshift-558db77b4-jtzpg","openshift-marketplace/redhat-operators-9x475","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781318 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-76766fc778-rq7bp"]
Jan 22 13:48:17 crc kubenswrapper[4769]: E0122 13:48:17.781470 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98422033-e252-4416-9d6c-9a782f84a615" containerName="installer"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781481 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="98422033-e252-4416-9d6c-9a782f84a615" containerName="installer"
Jan 22 13:48:17 crc kubenswrapper[4769]: E0122 13:48:17.781492 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerName="oauth-openshift"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781499 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerName="oauth-openshift"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781622 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="98422033-e252-4416-9d6c-9a782f84a615" containerName="installer"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781637 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" containerName="oauth-openshift"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781817 4769 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781841 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4d5e43a9-5dd9-470e-a3e1-65be2c0003c4"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.781984 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.786053 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.786237 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787340 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787535 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787583 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787755 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.787758 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.788112 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.788367 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.788382 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.791201 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.791375 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.793871 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.799918 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.804888 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.815090 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.838371 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.838353222 podStartE2EDuration="21.838353222s" podCreationTimestamp="2026-01-22 13:47:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:48:17.833842718 +0000 UTC m=+277.244952657" watchObservedRunningTime="2026-01-22 13:48:17.838353222 +0000 UTC m=+277.249463151"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.912745 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.934476 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939093 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-policies\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939375 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939476 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939554 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-dir\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939702 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24jsk\" (UniqueName: \"kubernetes.io/projected/d080b88c-ba18-4f18-b1f7-dee04d9c731b-kube-api-access-24jsk\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939780 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.939939 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940005 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940027 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940062 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940145 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:17 crc kubenswrapper[4769]: I0122 13:48:17.940174 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041258 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041299 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-policies\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041362 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041404 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041444 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041473 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-dir\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041503 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041533 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24jsk\" (UniqueName: \"kubernetes.io/projected/d080b88c-ba18-4f18-b1f7-dee04d9c731b-kube-api-access-24jsk\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041562 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041605 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041647 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041682 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.041681 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-dir\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.042265 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.043380 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-audit-policies\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.043565 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.043723 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.044137 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047107 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047263 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047322 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047470 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047741 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.047770 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.048309 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.049409 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d080b88c-ba18-4f18-b1f7-dee04d9c731b-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.063096 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24jsk\" (UniqueName: \"kubernetes.io/projected/d080b88c-ba18-4f18-b1f7-dee04d9c731b-kube-api-access-24jsk\") pod \"oauth-openshift-76766fc778-rq7bp\" (UID: \"d080b88c-ba18-4f18-b1f7-dee04d9c731b\") " pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.106010 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.194277 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.206004 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.236996 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.245497 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.246238 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.263723 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.275124 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.344546 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.392103 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.427399 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.466201 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.503092 4769 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.543500 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.560887 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.586848 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76766fc778-rq7bp"]
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.588051 4769 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.588327 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" gracePeriod=5
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.737926 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.764497 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.891078 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143027dc-ac6a-442f-bf57-3dcd7efd0427" path="/var/lib/kubelet/pods/143027dc-ac6a-442f-bf57-3dcd7efd0427/volumes"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.892190 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc744951-0370-42be-a1c0-e639d8d8cd31" path="/var/lib/kubelet/pods/bc744951-0370-42be-a1c0-e639d8d8cd31/volumes"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.893146 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e14c6636-281b-40e1-9ee8-1a08812104fd" path="/var/lib/kubelet/pods/e14c6636-281b-40e1-9ee8-1a08812104fd/volumes"
Jan 22 13:48:18 crc kubenswrapper[4769]: I0122 13:48:18.935308 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.100272 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.101978 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.110674 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76766fc778-rq7bp"]
Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.194212 4769 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" event={"ID":"d080b88c-ba18-4f18-b1f7-dee04d9c731b","Type":"ContainerStarted","Data":"6ac3b47bfb0905d5ab4a329814698e0d8548b8991480a98f770ced3de9a6fea7"} Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.215978 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.269845 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.292239 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.325833 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.342769 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.451952 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.485926 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.495954 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.531484 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.536303 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.540965 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.588750 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.589839 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.618727 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.729483 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.819984 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.857959 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.965052 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 
13:48:19 crc kubenswrapper[4769]: I0122 13:48:19.996990 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.137837 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.202509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" event={"ID":"d080b88c-ba18-4f18-b1f7-dee04d9c731b","Type":"ContainerStarted","Data":"4ac294b6ce1d87033264d3df3bfee6768956d8cfbcae1f8206e26e33cb2622b5"} Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.202860 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.208251 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.211319 4769 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.241042 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76766fc778-rq7bp" podStartSLOduration=64.241014556 podStartE2EDuration="1m4.241014556s" podCreationTimestamp="2026-01-22 13:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:48:20.236027019 +0000 UTC m=+279.647136988" watchObservedRunningTime="2026-01-22 13:48:20.241014556 +0000 UTC m=+279.652124515" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.692657 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.714106 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.954263 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 13:48:20 crc kubenswrapper[4769]: I0122 13:48:20.987759 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 13:48:21 crc kubenswrapper[4769]: I0122 13:48:21.208308 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 13:48:21 crc kubenswrapper[4769]: I0122 13:48:21.401528 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 13:48:21 crc kubenswrapper[4769]: I0122 13:48:21.484053 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 13:48:21 crc kubenswrapper[4769]: I0122 13:48:21.636519 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.175366 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.175838 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.233616 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.233678 4769 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" exitCode=137 Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.233731 4769 scope.go:117] "RemoveContainer" containerID="994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.233785 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236468 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236508 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236535 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236583 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236597 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236615 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236632 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236624 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.236655 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.237071 4769 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.237085 4769 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.237094 4769 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.237101 4769 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.244440 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.254418 4769 scope.go:117] "RemoveContainer" containerID="994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" Jan 22 13:48:24 crc kubenswrapper[4769]: E0122 13:48:24.254951 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce\": container with ID starting with 994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce not found: ID does not exist" containerID="994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.254989 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce"} err="failed to get container status \"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce\": rpc error: code = NotFound desc = could not find container \"994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce\": container with ID starting with 994803c634fe2140b142aa5aa7b24de248a614ae29172583b7926ab74e3de4ce not found: ID does not exist" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.338450 4769 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.891211 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.891592 4769 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.904089 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.904164 4769 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="611b7afc-b813-48f7-80c8-7cec2c2a5711" Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.908580 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 13:48:24 crc kubenswrapper[4769]: I0122 13:48:24.908625 4769 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="611b7afc-b813-48f7-80c8-7cec2c2a5711" Jan 22 13:48:35 crc kubenswrapper[4769]: I0122 13:48:35.302005 4769 generic.go:334] "Generic (PLEG): container finished" podID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerID="63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90" exitCode=0 Jan 22 13:48:35 crc kubenswrapper[4769]: I0122 13:48:35.302126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerDied","Data":"63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90"} Jan 22 13:48:35 crc kubenswrapper[4769]: I0122 13:48:35.303169 4769 scope.go:117] 
"RemoveContainer" containerID="63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90" Jan 22 13:48:36 crc kubenswrapper[4769]: I0122 13:48:36.310123 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerStarted","Data":"e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4"} Jan 22 13:48:36 crc kubenswrapper[4769]: I0122 13:48:36.311840 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:48:36 crc kubenswrapper[4769]: I0122 13:48:36.313220 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:48:38 crc kubenswrapper[4769]: I0122 13:48:38.493401 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 13:48:40 crc kubenswrapper[4769]: I0122 13:48:40.742843 4769 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.218737 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.423861 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.424123 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerName="controller-manager" containerID="cri-o://ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" gracePeriod=30 Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.523763 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"] Jan 22 13:48:41 crc kubenswrapper[4769]: I0122 13:48:41.524056 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" containerName="route-controller-manager" containerID="cri-o://2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210" gracePeriod=30 Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.354674 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.356324 4769 generic.go:334] "Generic (PLEG): container finished" podID="88755d81-da75-40b3-97c4-224eaad0eca2" containerID="2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210" exitCode=0 Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.356515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" event={"ID":"88755d81-da75-40b3-97c4-224eaad0eca2","Type":"ContainerDied","Data":"2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210"} Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.356832 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" event={"ID":"88755d81-da75-40b3-97c4-224eaad0eca2","Type":"ContainerDied","Data":"8a4ca8e6f7f24168e7b28e169244f2171fb54980af290f9158d1ed973b3b78f4"} Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.356889 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4ca8e6f7f24168e7b28e169244f2171fb54980af290f9158d1ed973b3b78f4" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.363990 4769 generic.go:334] "Generic (PLEG): container finished" podID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerID="ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" exitCode=0 Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.364024 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" event={"ID":"2b0fa7ff-24c4-431c-bc35-87f9483d5c70","Type":"ContainerDied","Data":"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b"} Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.364052 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" event={"ID":"2b0fa7ff-24c4-431c-bc35-87f9483d5c70","Type":"ContainerDied","Data":"99824953bd8e0a8c9f25b06e40921ab235122e7afd37d061ee57a611b654dd94"} Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.364074 4769 scope.go:117] "RemoveContainer" containerID="ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.364073 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k5psf" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.365598 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.384432 4769 scope.go:117] "RemoveContainer" containerID="ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.384831 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b\": container with ID starting with ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b not found: ID does not exist" containerID="ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.384870 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b"} err="failed to get container status \"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b\": rpc error: code = NotFound desc = could not find container \"ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b\": container with ID starting with ee7c2bbb114ddbe83948948a75500f8669adfebad9df9dbd0ee86c53a656337b not found: ID does not exist" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.450307 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.450644 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451035 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451115 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451837 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.451891 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") pod \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\" (UID: \"2b0fa7ff-24c4-431c-bc35-87f9483d5c70\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.453388 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config" (OuterVolumeSpecName: "config") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.456537 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.459651 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.459703 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.459717 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.459728 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.475107 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g" (OuterVolumeSpecName: "kube-api-access-sln4g") pod "2b0fa7ff-24c4-431c-bc35-87f9483d5c70" (UID: "2b0fa7ff-24c4-431c-bc35-87f9483d5c70"). InnerVolumeSpecName "kube-api-access-sln4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560197 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") pod \"88755d81-da75-40b3-97c4-224eaad0eca2\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560279 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") pod \"88755d81-da75-40b3-97c4-224eaad0eca2\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560301 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") pod \"88755d81-da75-40b3-97c4-224eaad0eca2\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560335 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") pod \"88755d81-da75-40b3-97c4-224eaad0eca2\" (UID: \"88755d81-da75-40b3-97c4-224eaad0eca2\") " Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.560532 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sln4g\" (UniqueName: \"kubernetes.io/projected/2b0fa7ff-24c4-431c-bc35-87f9483d5c70-kube-api-access-sln4g\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.561285 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca" (OuterVolumeSpecName: "client-ca") pod "88755d81-da75-40b3-97c4-224eaad0eca2" (UID: "88755d81-da75-40b3-97c4-224eaad0eca2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.561353 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config" (OuterVolumeSpecName: "config") pod "88755d81-da75-40b3-97c4-224eaad0eca2" (UID: "88755d81-da75-40b3-97c4-224eaad0eca2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.568812 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "88755d81-da75-40b3-97c4-224eaad0eca2" (UID: "88755d81-da75-40b3-97c4-224eaad0eca2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.569981 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc" (OuterVolumeSpecName: "kube-api-access-qxfjc") pod "88755d81-da75-40b3-97c4-224eaad0eca2" (UID: "88755d81-da75-40b3-97c4-224eaad0eca2"). InnerVolumeSpecName "kube-api-access-qxfjc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.657965 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"] Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.658215 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" containerName="route-controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658237 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" containerName="route-controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.658250 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658257 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.658273 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerName="controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658282 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerName="controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658406 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" containerName="route-controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658425 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" containerName="controller-manager" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658438 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.658911 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.661548 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.661609 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88755d81-da75-40b3-97c4-224eaad0eca2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.661624 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxfjc\" (UniqueName: \"kubernetes.io/projected/88755d81-da75-40b3-97c4-224eaad0eca2-kube-api-access-qxfjc\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.661637 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88755d81-da75-40b3-97c4-224eaad0eca2-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.666648 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.667442 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.675498 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.682168 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.711911 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.715688 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k5psf"] Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.748768 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"] Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.749207 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-xjtqx proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" podUID="74671cae-8e7e-40b3-8137-2b54a4032b26" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.755320 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"] Jan 22 13:48:42 crc kubenswrapper[4769]: E0122 13:48:42.755766 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-9mgb2 serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" podUID="81a6e8a2-199d-482d-98bc-0f2f16383d4e" Jan 22 13:48:42 crc 
kubenswrapper[4769]: I0122 13:48:42.763194 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763248 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763286 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763304 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763327 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763349 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763371 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.763451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 
crc kubenswrapper[4769]: I0122 13:48:42.763467 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864775 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864828 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864859 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864888 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864915 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864931 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864947 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864966 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.864987 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.865740 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.866050 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.866205 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.867016 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.867516 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.872595 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.876344 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.879873 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") pod \"route-controller-manager-7b57bf8468-2j2r6\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.884164 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") pod \"controller-manager-7d9c9df784-dt6l9\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:42 crc kubenswrapper[4769]: I0122 13:48:42.888987 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0fa7ff-24c4-431c-bc35-87f9483d5c70" path="/var/lib/kubelet/pods/2b0fa7ff-24c4-431c-bc35-87f9483d5c70/volumes" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371633 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371854 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.381633 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.388359 4769 util.go:30] "No sandbox for pod can be found. 
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371633 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.371854 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.381633 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.388359 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.400096 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"]
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.404409 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8qp45"]
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573359 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573429 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") pod \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573503 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") pod \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573562 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573590 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") pod \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573620 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") pod \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\" (UID: \"81a6e8a2-199d-482d-98bc-0f2f16383d4e\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573649 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573687 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") "
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.573730 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") pod \"74671cae-8e7e-40b3-8137-2b54a4032b26\" (UID: \"74671cae-8e7e-40b3-8137-2b54a4032b26\") " Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.574593 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca" (OuterVolumeSpecName: "client-ca") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.574597 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca" (OuterVolumeSpecName: "client-ca") pod "81a6e8a2-199d-482d-98bc-0f2f16383d4e" (UID: "81a6e8a2-199d-482d-98bc-0f2f16383d4e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.574781 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.574853 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config" (OuterVolumeSpecName: "config") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.575308 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config" (OuterVolumeSpecName: "config") pod "81a6e8a2-199d-482d-98bc-0f2f16383d4e" (UID: "81a6e8a2-199d-482d-98bc-0f2f16383d4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.578972 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81a6e8a2-199d-482d-98bc-0f2f16383d4e" (UID: "81a6e8a2-199d-482d-98bc-0f2f16383d4e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.580342 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2" (OuterVolumeSpecName: "kube-api-access-9mgb2") pod "81a6e8a2-199d-482d-98bc-0f2f16383d4e" (UID: "81a6e8a2-199d-482d-98bc-0f2f16383d4e"). InnerVolumeSpecName "kube-api-access-9mgb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.580887 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.581157 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx" (OuterVolumeSpecName: "kube-api-access-xjtqx") pod "74671cae-8e7e-40b3-8137-2b54a4032b26" (UID: "74671cae-8e7e-40b3-8137-2b54a4032b26"). InnerVolumeSpecName "kube-api-access-xjtqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675214 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675247 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-client-ca\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675255 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81a6e8a2-199d-482d-98bc-0f2f16383d4e-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675264 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81a6e8a2-199d-482d-98bc-0f2f16383d4e-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675275 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675285 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74671cae-8e7e-40b3-8137-2b54a4032b26-config\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675297 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74671cae-8e7e-40b3-8137-2b54a4032b26-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675309 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjtqx\" (UniqueName: \"kubernetes.io/projected/74671cae-8e7e-40b3-8137-2b54a4032b26-kube-api-access-xjtqx\") on node \"crc\" DevicePath \"\""
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.675320 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mgb2\" (UniqueName: \"kubernetes.io/projected/81a6e8a2-199d-482d-98bc-0f2f16383d4e-kube-api-access-9mgb2\") on node \"crc\" DevicePath \"\""
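The deletion path runs the same reconciler in reverse: "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded", then "Volume detached ... DevicePath \"\"" once the actual state of the world drops the volume; the "Cleaned up orphaned pod volumes dir" entries can only appear after all of a pod's volumes report detached. A rough sketch of that ordering constraint; cleanupOrphanedPodDir and the mounted() check are hypothetical helpers, not the kubelet's real ones:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDir mimics the ordering visible in the log: every
// volume must be detached before the pod's volumes directory may be
// removed, otherwise kernel mounts could be orphaned.
func cleanupOrphanedPodDir(podUID string, mounted func(string) bool) error {
	dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if mounted(filepath.Join(dir, e.Name())) {
			// Still mounted: refuse and let a later sync retry.
			return fmt.Errorf("pod %s still has mounted volume %s", podUID, e.Name())
		}
	}
	fmt.Printf("Cleaned up orphaned pod volumes dir podUID=%q path=%q\n", podUID, dir)
	return os.RemoveAll(dir)
}

func main() {
	_ = cleanupOrphanedPodDir("74671cae-8e7e-40b3-8137-2b54a4032b26", func(string) bool { return false })
}
```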
Jan 22 13:48:43 crc kubenswrapper[4769]: I0122 13:48:43.930653 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.379218 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.379295 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.426865 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.427815 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.429709 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.429737 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.429757 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.430082 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.430313 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.432361 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.432691 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-2j2r6"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.432505 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.445180 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.467853 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.479862 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-dt6l9"]
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.588280 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"
Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.588356 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " 
pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.588384 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.588421 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.689198 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.689275 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.689296 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.689350 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.690460 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.690946 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 
crc kubenswrapper[4769]: I0122 13:48:44.696396 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.715506 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") pod \"route-controller-manager-9db9fd7fb-fmp74\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.740371 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.891109 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74671cae-8e7e-40b3-8137-2b54a4032b26" path="/var/lib/kubelet/pods/74671cae-8e7e-40b3-8137-2b54a4032b26/volumes" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.891874 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a6e8a2-199d-482d-98bc-0f2f16383d4e" path="/var/lib/kubelet/pods/81a6e8a2-199d-482d-98bc-0f2f16383d4e/volumes" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.892226 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88755d81-da75-40b3-97c4-224eaad0eca2" path="/var/lib/kubelet/pods/88755d81-da75-40b3-97c4-224eaad0eca2/volumes" Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.922054 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"] Jan 22 13:48:44 crc kubenswrapper[4769]: I0122 13:48:44.990512 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.385371 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerStarted","Data":"3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3"} Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.385605 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.385616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerStarted","Data":"6cc1e5e19564d09af54c555b766313a9b3a7cbbeabd3df7a270e34fcad39380a"} Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.389995 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:48:45 crc kubenswrapper[4769]: I0122 13:48:45.402173 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" podStartSLOduration=3.402153341 podStartE2EDuration="3.402153341s" podCreationTimestamp="2026-01-22 13:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:48:45.399811396 +0000 UTC m=+304.810921345" watchObservedRunningTime="2026-01-22 13:48:45.402153341 +0000 UTC m=+304.813263270"
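The pod_startup_latency_tracker entry above can be checked by hand: watchObservedRunningTime (13:48:45.402153341) minus podCreationTimestamp (13:48:42) gives the reported 3.402153341s, and the zero-valued ("0001-01-01") pull timestamps mean no image was pulled, which is why podStartSLOduration and podStartE2EDuration agree here. The arithmetic, as a small Go program:

```go
package main

import (
	"fmt"
	"time"
)

// A sketch of the arithmetic behind pod_startup_latency_tracker's
// "Observed pod startup duration" entries. The zero-valued pull
// timestamps ("0001-01-01 ...") mean no image pull happened, so the
// SLO duration and the end-to-end duration coincide.
func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-22T13:48:42Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-22T13:48:45.402153341Z")

	var firstPull, lastPull time.Time // zero values, as in the log
	pull := lastPull.Sub(firstPull)   // 0s when nothing was pulled

	e2e := running.Sub(created)
	slo := e2e - pull
	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%s\n", e2e, slo)
	// Output: podStartE2EDuration=3.402153341s podStartSLOduration=3.402153341s
}
```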
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.416893 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.417847 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.421637 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.421769 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.421643 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.422557 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.422827 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.426986 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.435356 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.435736 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521319 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521412 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521436 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.521480 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623596 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623670 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623696 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.623748 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.625636 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 
13:48:47.625642 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.626572 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.631165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.642211 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") pod \"controller-manager-5d8d8f6646-fl7vl\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") " pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.737553 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" Jan 22 13:48:47 crc kubenswrapper[4769]: I0122 13:48:47.924919 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"] Jan 22 13:48:47 crc kubenswrapper[4769]: W0122 13:48:47.947015 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod016c4fa8_4f5f_4864_bd36_07b09ce79d08.slice/crio-c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046 WatchSource:0}: Error finding container c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046: Status 404 returned error can't find the container with id c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046 Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.402423 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" event={"ID":"016c4fa8-4f5f-4864-bd36-07b09ce79d08","Type":"ContainerStarted","Data":"ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b"} Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.402462 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" event={"ID":"016c4fa8-4f5f-4864-bd36-07b09ce79d08","Type":"ContainerStarted","Data":"c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046"} Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.402816 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.408366 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:48:48 crc kubenswrapper[4769]: I0122 13:48:48.452187 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" podStartSLOduration=6.452164198 podStartE2EDuration="6.452164198s" podCreationTimestamp="2026-01-22 13:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:48:48.427335285 +0000 UTC m=+307.838445214" watchObservedRunningTime="2026-01-22 13:48:48.452164198 +0000 UTC m=+307.863274127"
Jan 22 13:48:51 crc kubenswrapper[4769]: I0122 13:48:51.649394 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 22 13:48:51 crc kubenswrapper[4769]: I0122 13:48:51.659485 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 22 13:48:58 crc kubenswrapper[4769]: I0122 13:48:58.541197 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 22 13:49:01 crc kubenswrapper[4769]: I0122 13:49:01.700960 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"]
Jan 22 13:49:01 crc kubenswrapper[4769]: I0122 13:49:01.702420 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerName="controller-manager" containerID="cri-o://ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b" gracePeriod=30
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.490972 4769 generic.go:334] "Generic (PLEG): container finished" podID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerID="ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b" exitCode=0
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.491212 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" event={"ID":"016c4fa8-4f5f-4864-bd36-07b09ce79d08","Type":"ContainerDied","Data":"ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b"}
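The kill sequence above is the grace-period contract: kuberuntime_container.go asks the runtime to stop the container with gracePeriod=30, the container exits on its own (exitCode=0) well inside that window, and PLEG then reports ContainerDied; a force-kill would only follow if the deadline expired first. A minimal sketch of that contract, with illustrative stop/kill callbacks rather than the kubelet's real CRI client:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// stopContainer asks the runtime to stop the container (SIGTERM via
// CRI in the real kubelet) and only escalates if the grace period
// elapses first.
func stopContainer(id string, grace time.Duration, stop func(context.Context, string) error, kill func(string)) {
	ctx, cancel := context.WithTimeout(context.Background(), grace)
	defer cancel()
	if err := stop(ctx, id); err != nil {
		kill(id) // deadline hit: force-kill, like SIGKILL after gracePeriod
		return
	}
	fmt.Printf("container %s exited within grace period\n", id)
}

func main() {
	stopContainer("cri-o://ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b",
		30*time.Second,
		func(ctx context.Context, id string) error { return nil }, // exits promptly, exitCode=0
		func(id string) { fmt.Println("force-killed", id) },
	)
}
```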
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.716581 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816442 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816561 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816813 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816870 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.816965 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") pod \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\" (UID: \"016c4fa8-4f5f-4864-bd36-07b09ce79d08\") "
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.817569 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca" (OuterVolumeSpecName: "client-ca") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.817598 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config" (OuterVolumeSpecName: "config") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.817583 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.822910 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.822927 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4" (OuterVolumeSpecName: "kube-api-access-4fmq4") pod "016c4fa8-4f5f-4864-bd36-07b09ce79d08" (UID: "016c4fa8-4f5f-4864-bd36-07b09ce79d08"). InnerVolumeSpecName "kube-api-access-4fmq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918412 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918457 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918472 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fmq4\" (UniqueName: \"kubernetes.io/projected/016c4fa8-4f5f-4864-bd36-07b09ce79d08-kube-api-access-4fmq4\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918484 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/016c4fa8-4f5f-4864-bd36-07b09ce79d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:02 crc kubenswrapper[4769]: I0122 13:49:02.918495 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/016c4fa8-4f5f-4864-bd36-07b09ce79d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.430408 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"] Jan 22 13:49:03 crc kubenswrapper[4769]: E0122 13:49:03.430900 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerName="controller-manager" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.430913 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerName="controller-manager" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.431001 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" containerName="controller-manager" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.431355 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.440457 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"] Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.498426 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" event={"ID":"016c4fa8-4f5f-4864-bd36-07b09ce79d08","Type":"ContainerDied","Data":"c1e00b0365e3cf1966a9be207e6d39bc0ea5aa704d87365d6b58123e70795046"} Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.498475 4769 scope.go:117] "RemoveContainer" containerID="ab7030d019c42ab8878671b18634cf3d42d459fb4aa35caf3cd6c916cef00a9b" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.498522 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.523139 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"] Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mkv6\" (UniqueName: \"kubernetes.io/projected/7e370c3a-a358-4548-bb11-7780ee6ef6b8-kube-api-access-4mkv6\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526568 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-config\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526705 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-client-ca\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526785 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e370c3a-a358-4548-bb11-7780ee6ef6b8-serving-cert\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.526918 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.529966 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-5d8d8f6646-fl7vl"] Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628673 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mkv6\" (UniqueName: \"kubernetes.io/projected/7e370c3a-a358-4548-bb11-7780ee6ef6b8-kube-api-access-4mkv6\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628735 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-config\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628771 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-client-ca\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628804 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e370c3a-a358-4548-bb11-7780ee6ef6b8-serving-cert\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.628871 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.630357 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-proxy-ca-bundles\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.630476 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-client-ca\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.630762 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e370c3a-a358-4548-bb11-7780ee6ef6b8-config\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.633192 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e370c3a-a358-4548-bb11-7780ee6ef6b8-serving-cert\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.649532 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mkv6\" (UniqueName: \"kubernetes.io/projected/7e370c3a-a358-4548-bb11-7780ee6ef6b8-kube-api-access-4mkv6\") pod \"controller-manager-7d9c9df784-zfk7f\" (UID: \"7e370c3a-a358-4548-bb11-7780ee6ef6b8\") " pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:03 crc kubenswrapper[4769]: I0122 13:49:03.744949 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.124126 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"]
Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.505251 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" event={"ID":"7e370c3a-a358-4548-bb11-7780ee6ef6b8","Type":"ContainerStarted","Data":"8ea0cad14a4a41f18b0d4d0852fd4e923c49a749d882170f2419c11a8b351992"}
Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.505623 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.505636 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" event={"ID":"7e370c3a-a358-4548-bb11-7780ee6ef6b8","Type":"ContainerStarted","Data":"28f72b117ed18a5edb4a3d77a06e43c8efcb869efe58ee963c246653f12abbc1"}
Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.513329 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f"
Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.526834 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d9c9df784-zfk7f" podStartSLOduration=3.526814727 podStartE2EDuration="3.526814727s" podCreationTimestamp="2026-01-22 13:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:04.521640635 +0000 UTC m=+323.932750574" watchObservedRunningTime="2026-01-22 13:49:04.526814727 +0000 UTC m=+323.937924676"
Jan 22 13:49:04 crc kubenswrapper[4769]: I0122 13:49:04.889548 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016c4fa8-4f5f-4864-bd36-07b09ce79d08" path="/var/lib/kubelet/pods/016c4fa8-4f5f-4864-bd36-07b09ce79d08/volumes"
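This rollout, like the earlier ones, is driven entirely by events: ADD, UPDATE, DELETE and REMOVE arrive from the API watch (the kubelet.go:2421/2428/2437/2431 entries), while ContainerStarted/ContainerDied come from PLEG relisting (kubelet.go:2453). A toy dispatch loop in that shape follows; the channel and type names are illustrative, not the kubelet's:

```go
package main

import "fmt"

// podEvent loosely models the SyncLoop entries in this log: ADD,
// UPDATE, DELETE and REMOVE arrive from the API watch; PLEG events
// (ContainerStarted/ContainerDied) come from runtime relisting.
type podEvent struct {
	source string // "api" or "pleg" (illustrative)
	op     string // ADD, UPDATE, DELETE, REMOVE, ContainerDied, ...
	pod    string
}

// syncLoopIteration dispatches one event the way the kubelet's log
// lines read; the real loop also consults probe and housekeeping
// channels.
func syncLoopIteration(e podEvent) {
	switch e.source {
	case "api":
		fmt.Printf("SyncLoop %s source=%q pods=[%q]\n", e.op, e.source, e.pod)
	case "pleg":
		fmt.Printf("SyncLoop (PLEG): event for pod %q: %s\n", e.pod, e.op)
	}
}

func main() {
	events := make(chan podEvent, 2)
	events <- podEvent{"api", "DELETE", "openshift-marketplace/certified-operators-7wh4n"}
	events <- podEvent{"pleg", "ContainerDied", "openshift-marketplace/certified-operators-7wh4n"}
	close(events)
	for e := range events { // the kubelet selects over several such channels
		syncLoopIteration(e)
	}
}
```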
Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.963719 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"]
Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.964589 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7wh4n" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="registry-server" containerID="cri-o://b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259" gracePeriod=30
Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.978174 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"]
Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.978448 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lxbp4" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="registry-server" containerID="cri-o://40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1" gracePeriod=30
Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.996531 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"]
Jan 22 13:49:09 crc kubenswrapper[4769]: I0122 13:49:09.997256 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" containerID="cri-o://e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4" gracePeriod=30
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.002263 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"]
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.002539 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v8jk5" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="registry-server" containerID="cri-o://2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893" gracePeriod=30
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.005676 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"]
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.005945 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2w22" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="registry-server" containerID="cri-o://d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b" gracePeriod=30
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.009419 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7vfmb"]
Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.010440 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.026736 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7vfmb"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.115431 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.115493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.115531 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nkq\" (UniqueName: \"kubernetes.io/projected/1cfacd8e-cbec-4f68-b90c-ede3a679e454-kube-api-access-95nkq\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.217556 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.217639 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.217667 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95nkq\" (UniqueName: \"kubernetes.io/projected/1cfacd8e-cbec-4f68-b90c-ede3a679e454-kube-api-access-95nkq\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.218940 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.232505 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/1cfacd8e-cbec-4f68-b90c-ede3a679e454-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.233958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95nkq\" (UniqueName: \"kubernetes.io/projected/1cfacd8e-cbec-4f68-b90c-ede3a679e454-kube-api-access-95nkq\") pod \"marketplace-operator-79b997595-7vfmb\" (UID: \"1cfacd8e-cbec-4f68-b90c-ede3a679e454\") " pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.327113 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.496356 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543036 4769 generic.go:334] "Generic (PLEG): container finished" podID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerID="b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259" exitCode=0 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543118 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerDied","Data":"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259"} Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543165 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wh4n" event={"ID":"4f403243-0359-478d-a3a6-29a8f0bc29e2","Type":"ContainerDied","Data":"b542c5dbcb707bb656b636afb6aa1bcc3a67f0090bf88281e297bd475aa9bd3f"} Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543190 4769 scope.go:117] "RemoveContainer" containerID="b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.543350 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wh4n" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.547391 4769 generic.go:334] "Generic (PLEG): container finished" podID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerID="d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b" exitCode=0 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.547566 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerDied","Data":"d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b"} Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.550262 4769 generic.go:334] "Generic (PLEG): container finished" podID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerID="e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4" exitCode=0 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.550390 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerDied","Data":"e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4"} Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.553963 4769 generic.go:334] "Generic (PLEG): container finished" podID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerID="2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893" exitCode=0 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.554035 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerDied","Data":"2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893"} Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.556293 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerID="40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1" exitCode=0 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.556330 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerDied","Data":"40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1"} Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.577072 4769 scope.go:117] "RemoveContainer" containerID="c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.612125 4769 scope.go:117] "RemoveContainer" containerID="4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.626072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") pod \"4f403243-0359-478d-a3a6-29a8f0bc29e2\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.626180 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") pod \"4f403243-0359-478d-a3a6-29a8f0bc29e2\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.626236 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") pod \"4f403243-0359-478d-a3a6-29a8f0bc29e2\" (UID: \"4f403243-0359-478d-a3a6-29a8f0bc29e2\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.634862 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities" (OuterVolumeSpecName: "utilities") pod "4f403243-0359-478d-a3a6-29a8f0bc29e2" (UID: "4f403243-0359-478d-a3a6-29a8f0bc29e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.637911 4769 scope.go:117] "RemoveContainer" containerID="b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.638358 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc" (OuterVolumeSpecName: "kube-api-access-xx5tc") pod "4f403243-0359-478d-a3a6-29a8f0bc29e2" (UID: "4f403243-0359-478d-a3a6-29a8f0bc29e2"). InnerVolumeSpecName "kube-api-access-xx5tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: E0122 13:49:10.639301 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259\": container with ID starting with b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259 not found: ID does not exist" containerID="b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.639361 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259"} err="failed to get container status \"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259\": rpc error: code = NotFound desc = could not find container \"b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259\": container with ID starting with b88e53f360c79b642215822aa458c85cddfb527d712a2e23409b20d9d691b259 not found: ID does not exist" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.639392 4769 scope.go:117] "RemoveContainer" containerID="c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6" Jan 22 13:49:10 crc kubenswrapper[4769]: E0122 13:49:10.640334 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6\": container with ID starting with c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6 not found: ID does not exist" containerID="c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.640358 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6"} err="failed to get container status \"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6\": rpc error: code = NotFound desc = could not find container \"c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6\": container with ID starting with 
c32df72a8ee39ee0d3f1c526bf4f6f62cee45d6cd2f6eccfd82a50af54dc18b6 not found: ID does not exist" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.640376 4769 scope.go:117] "RemoveContainer" containerID="4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd" Jan 22 13:49:10 crc kubenswrapper[4769]: E0122 13:49:10.641053 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd\": container with ID starting with 4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd not found: ID does not exist" containerID="4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.641086 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd"} err="failed to get container status \"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd\": rpc error: code = NotFound desc = could not find container \"4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd\": container with ID starting with 4c144c7583b39f46ce262d7733d67ac1e5ba5328388a3f5612a2fae5ceb8a4dd not found: ID does not exist" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.641099 4769 scope.go:117] "RemoveContainer" containerID="63ce7caf2f29fa4c750335f093e515944a1c8003ddf040ccfa68087863d13e90" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.694756 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f403243-0359-478d-a3a6-29a8f0bc29e2" (UID: "4f403243-0359-478d-a3a6-29a8f0bc29e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.727248 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.727269 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx5tc\" (UniqueName: \"kubernetes.io/projected/4f403243-0359-478d-a3a6-29a8f0bc29e2-kube-api-access-xx5tc\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.727281 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f403243-0359-478d-a3a6-29a8f0bc29e2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.728972 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.735041 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.745644 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.748770 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.776103 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7vfmb"] Jan 22 13:49:10 crc kubenswrapper[4769]: W0122 13:49:10.782509 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cfacd8e_cbec_4f68_b90c_ede3a679e454.slice/crio-028babb366ca965535d727a422b2a74df211c727abb58a4b5897663ebebca971 WatchSource:0}: Error finding container 028babb366ca965535d727a422b2a74df211c727abb58a4b5897663ebebca971: Status 404 returned error can't find the container with id 028babb366ca965535d727a422b2a74df211c727abb58a4b5897663ebebca971 Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.829008 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") pod \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.829131 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") pod \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.829235 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") pod \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\" (UID: \"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.830565 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities" (OuterVolumeSpecName: "utilities") pod "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" (UID: "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.833013 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck" (OuterVolumeSpecName: "kube-api-access-qkpck") pod "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" (UID: "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"). InnerVolumeSpecName "kube-api-access-qkpck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.876955 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.881629 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7wh4n"] Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.893189 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" path="/var/lib/kubelet/pods/4f403243-0359-478d-a3a6-29a8f0bc29e2/volumes" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930633 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") pod \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930678 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") pod \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930704 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") pod \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\" (UID: \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") pod \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930766 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") pod \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930825 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") pod \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\" (UID: \"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930845 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") pod \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930870 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") pod \"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\" (UID: 
\"7d9e80ce-c46e-4a99-814e-0d9b1b65623f\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.930893 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") pod \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\" (UID: \"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85\") " Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.931105 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.931117 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkpck\" (UniqueName: \"kubernetes.io/projected/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-kube-api-access-qkpck\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.931729 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities" (OuterVolumeSpecName: "utilities") pod "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" (UID: "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.933293 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities" (OuterVolumeSpecName: "utilities") pod "7d9e80ce-c46e-4a99-814e-0d9b1b65623f" (UID: "7d9e80ce-c46e-4a99-814e-0d9b1b65623f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.933647 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" (UID: "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.936133 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" (UID: "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.938103 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf" (OuterVolumeSpecName: "kube-api-access-x86gf") pod "7d9e80ce-c46e-4a99-814e-0d9b1b65623f" (UID: "7d9e80ce-c46e-4a99-814e-0d9b1b65623f"). InnerVolumeSpecName "kube-api-access-x86gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.944570 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw" (OuterVolumeSpecName: "kube-api-access-dm4mw") pod "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" (UID: "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85"). 
InnerVolumeSpecName "kube-api-access-dm4mw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.945058 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq" (OuterVolumeSpecName: "kube-api-access-vxdbq") pod "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" (UID: "dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae"). InnerVolumeSpecName "kube-api-access-vxdbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.962364 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" (UID: "652c2c5a-f885-4bf3-a4f8-73a4717f6a3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.981527 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" (UID: "98dd81ac-1a92-4d5a-9e09-bcc49ac33a85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:10 crc kubenswrapper[4769]: I0122 13:49:10.997729 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d9e80ce-c46e-4a99-814e-0d9b1b65623f" (UID: "7d9e80ce-c46e-4a99-814e-0d9b1b65623f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032089 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxdbq\" (UniqueName: \"kubernetes.io/projected/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-kube-api-access-vxdbq\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032151 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm4mw\" (UniqueName: \"kubernetes.io/projected/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-kube-api-access-dm4mw\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032169 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032181 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032194 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032207 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x86gf\" (UniqueName: \"kubernetes.io/projected/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-kube-api-access-x86gf\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032218 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032231 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032242 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.032252 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9e80ce-c46e-4a99-814e-0d9b1b65623f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.563812 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" event={"ID":"dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae","Type":"ContainerDied","Data":"c437a788f729ec1c74235c0c86ed4e15424a790ae709346c3620566dfd2a5bb2"} Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.563884 4769 scope.go:117] "RemoveContainer" containerID="e34ed27b31ae8964c9182b8aa629d506dd39a530839a18c60e8a9d7b09eba8d4" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.563898 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5jwbt" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.570454 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8jk5" event={"ID":"98dd81ac-1a92-4d5a-9e09-bcc49ac33a85","Type":"ContainerDied","Data":"6e66e2dbf8bc8a080c55b13a7260516fe1212a4c0154bcf230d5878c8ebeeeed"} Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.570482 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8jk5" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.577204 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxbp4" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.577383 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxbp4" event={"ID":"7d9e80ce-c46e-4a99-814e-0d9b1b65623f","Type":"ContainerDied","Data":"87dc0ac39542afbc65ec3e6d0bdb93cd67aa154947a205f465b24220379804bc"} Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.582150 4769 scope.go:117] "RemoveContainer" containerID="2531649194d6834a01b61908b7793b00e8109633abda7d5a02d5eb68f320b893" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.591498 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2w22" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.592018 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2w22" event={"ID":"652c2c5a-f885-4bf3-a4f8-73a4717f6a3a","Type":"ContainerDied","Data":"ab73ea8d8d9a566fef3480c2969fb2296deb50f4ddfdc8ecead203c9dda4e719"} Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.595443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" event={"ID":"1cfacd8e-cbec-4f68-b90c-ede3a679e454","Type":"ContainerStarted","Data":"6d0480232009b5f6edcca36dcb41700dfaa70a49bb5305e36bb6a17d2e374b50"} Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.595503 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" event={"ID":"1cfacd8e-cbec-4f68-b90c-ede3a679e454","Type":"ContainerStarted","Data":"028babb366ca965535d727a422b2a74df211c727abb58a4b5897663ebebca971"} Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.595847 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.596976 4769 scope.go:117] "RemoveContainer" containerID="19f11c0236c241f234013da4669e8dd67b3f4430afe2db85d03abaaa7cb48e7c" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.599388 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.612515 4769 scope.go:117] "RemoveContainer" containerID="bd94526c2545e7d42d2caa419fef7b4eaae03cecfaac7722e27dfd4ed49fa03a" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.623270 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7vfmb" podStartSLOduration=2.623250703 podStartE2EDuration="2.623250703s" podCreationTimestamp="2026-01-22 
13:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:11.618742882 +0000 UTC m=+331.029852811" watchObservedRunningTime="2026-01-22 13:49:11.623250703 +0000 UTC m=+331.034360632" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.637289 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.637340 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5jwbt"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.642929 4769 scope.go:117] "RemoveContainer" containerID="40c54e06453c65c374b60fc978fde1151fc81cdd83905f6d1eab45b8f04a0be1" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.673549 4769 scope.go:117] "RemoveContainer" containerID="0b4e548d90afb445385c5445511aa7202d16841342834b94c99673ef067eba6b" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.674233 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.679996 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8jk5"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.693899 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.696776 4769 scope.go:117] "RemoveContainer" containerID="f32dd634065691a644d2461a7fae6aa8b2a0092557591202f1589d051602d962" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.701519 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2w22"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.706697 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.710941 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxbp4"] Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.711105 4769 scope.go:117] "RemoveContainer" containerID="d825a6e9070be650270f2a51743038dd26cc2e4afe06ccff5aa90cefb1c29a2b" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.728703 4769 scope.go:117] "RemoveContainer" containerID="fa803241b9a5ea5819645ac5f5279180cdfd0cd95f936430c68e37095716dc0b" Jan 22 13:49:11 crc kubenswrapper[4769]: I0122 13:49:11.743441 4769 scope.go:117] "RemoveContainer" containerID="5773768bc9993d556325ab6b5012f24996ced11ddc55ad2bd215bb338220f42b" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.889760 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" path="/var/lib/kubelet/pods/652c2c5a-f885-4bf3-a4f8-73a4717f6a3a/volumes" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.890772 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" path="/var/lib/kubelet/pods/7d9e80ce-c46e-4a99-814e-0d9b1b65623f/volumes" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.891467 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" path="/var/lib/kubelet/pods/98dd81ac-1a92-4d5a-9e09-bcc49ac33a85/volumes" Jan 22 13:49:12 crc kubenswrapper[4769]: 
I0122 13:49:12.892558 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" path="/var/lib/kubelet/pods/dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae/volumes" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.976667 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dtrsx"] Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979078 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979116 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979142 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979156 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979174 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979187 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979207 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979219 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979234 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979247 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979267 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979281 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979299 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979311 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979327 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979339 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" 
containerName="extract-utilities" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979356 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979368 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979382 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979395 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979413 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979427 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979446 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979457 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979474 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979486 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="extract-content" Jan 22 13:49:12 crc kubenswrapper[4769]: E0122 13:49:12.979499 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979510 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979691 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979713 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9e80ce-c46e-4a99-814e-0d9b1b65623f" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979728 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="652c2c5a-f885-4bf3-a4f8-73a4717f6a3a" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979749 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dd81ac-1a92-4d5a-9e09-bcc49ac33a85" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.979766 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f403243-0359-478d-a3a6-29a8f0bc29e2" containerName="registry-server" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.980095 4769 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dd5e0d5a-980e-4ea6-92f3-be72bfe7b9ae" containerName="marketplace-operator" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.981079 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.985333 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtrsx"] Jan 22 13:49:12 crc kubenswrapper[4769]: I0122 13:49:12.986219 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.158565 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-utilities\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.158623 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llktn\" (UniqueName: \"kubernetes.io/projected/c5db9abf-deb2-494a-b618-7180fbf1e53e-kube-api-access-llktn\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.158703 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-catalog-content\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.259982 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-catalog-content\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.260094 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-utilities\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.260131 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llktn\" (UniqueName: \"kubernetes.io/projected/c5db9abf-deb2-494a-b618-7180fbf1e53e-kube-api-access-llktn\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.260628 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-catalog-content\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.260670 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5db9abf-deb2-494a-b618-7180fbf1e53e-utilities\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.276399 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llktn\" (UniqueName: \"kubernetes.io/projected/c5db9abf-deb2-494a-b618-7180fbf1e53e-kube-api-access-llktn\") pod \"redhat-operators-dtrsx\" (UID: \"c5db9abf-deb2-494a-b618-7180fbf1e53e\") " pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.299106 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.573518 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-twpxx"] Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.575234 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.576828 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.585145 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twpxx"] Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.667215 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxmn\" (UniqueName: \"kubernetes.io/projected/d88e1938-2f4c-43c7-9af2-98fb7222cee2-kube-api-access-dqxmn\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.667263 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-utilities\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.667331 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-catalog-content\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.683913 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtrsx"] Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.768891 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxmn\" (UniqueName: \"kubernetes.io/projected/d88e1938-2f4c-43c7-9af2-98fb7222cee2-kube-api-access-dqxmn\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.768957 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-utilities\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.769019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-catalog-content\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.769493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-catalog-content\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.770665 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d88e1938-2f4c-43c7-9af2-98fb7222cee2-utilities\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.789745 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxmn\" (UniqueName: \"kubernetes.io/projected/d88e1938-2f4c-43c7-9af2-98fb7222cee2-kube-api-access-dqxmn\") pod \"redhat-marketplace-twpxx\" (UID: \"d88e1938-2f4c-43c7-9af2-98fb7222cee2\") " pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:13 crc kubenswrapper[4769]: I0122 13:49:13.941880 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.329666 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twpxx"] Jan 22 13:49:14 crc kubenswrapper[4769]: W0122 13:49:14.358950 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd88e1938_2f4c_43c7_9af2_98fb7222cee2.slice/crio-7398c539207c1069ae28abd790cf9fc265e19ae9d66293387a1794e1e2d2e94b WatchSource:0}: Error finding container 7398c539207c1069ae28abd790cf9fc265e19ae9d66293387a1794e1e2d2e94b: Status 404 returned error can't find the container with id 7398c539207c1069ae28abd790cf9fc265e19ae9d66293387a1794e1e2d2e94b Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.625109 4769 generic.go:334] "Generic (PLEG): container finished" podID="c5db9abf-deb2-494a-b618-7180fbf1e53e" containerID="49753e10ea9e80b5b06c95d93825b264bdbd4245c3df1979127d3c6411fe8943" exitCode=0 Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.625229 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtrsx" event={"ID":"c5db9abf-deb2-494a-b618-7180fbf1e53e","Type":"ContainerDied","Data":"49753e10ea9e80b5b06c95d93825b264bdbd4245c3df1979127d3c6411fe8943"} Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.625462 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtrsx" event={"ID":"c5db9abf-deb2-494a-b618-7180fbf1e53e","Type":"ContainerStarted","Data":"4bd9bfec0be5434224f4e0d8160cdb43c11490454a6a97a2c42832fc0f091f60"} Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.629448 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerStarted","Data":"0fff4b1a88ef5daf500213bb00928a44781ebb9dc006c5fe161656f2c3a9e8a2"} Jan 22 13:49:14 crc kubenswrapper[4769]: I0122 13:49:14.629487 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerStarted","Data":"7398c539207c1069ae28abd790cf9fc265e19ae9d66293387a1794e1e2d2e94b"} Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.372887 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vlvj"] Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.374268 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.376196 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.389897 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vlvj"] Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.487827 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tk5g\" (UniqueName: \"kubernetes.io/projected/6bbcc4b3-c280-4093-9419-7d94204256fe-kube-api-access-5tk5g\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.488651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-catalog-content\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.488848 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-utilities\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.590805 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tk5g\" (UniqueName: \"kubernetes.io/projected/6bbcc4b3-c280-4093-9419-7d94204256fe-kube-api-access-5tk5g\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.592302 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-catalog-content\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.592449 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-utilities\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.592870 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-catalog-content\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.593324 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bbcc4b3-c280-4093-9419-7d94204256fe-utilities\") pod \"certified-operators-8vlvj\" (UID: 
\"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.616000 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tk5g\" (UniqueName: \"kubernetes.io/projected/6bbcc4b3-c280-4093-9419-7d94204256fe-kube-api-access-5tk5g\") pod \"certified-operators-8vlvj\" (UID: \"6bbcc4b3-c280-4093-9419-7d94204256fe\") " pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.636562 4769 generic.go:334] "Generic (PLEG): container finished" podID="d88e1938-2f4c-43c7-9af2-98fb7222cee2" containerID="0fff4b1a88ef5daf500213bb00928a44781ebb9dc006c5fe161656f2c3a9e8a2" exitCode=0 Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.636747 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerDied","Data":"0fff4b1a88ef5daf500213bb00928a44781ebb9dc006c5fe161656f2c3a9e8a2"} Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.692369 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.978098 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8nrlf"] Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.979592 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.982554 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 13:49:15 crc kubenswrapper[4769]: I0122 13:49:15.988342 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nrlf"] Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.077581 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vlvj"] Jan 22 13:49:16 crc kubenswrapper[4769]: W0122 13:49:16.087220 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bbcc4b3_c280_4093_9419_7d94204256fe.slice/crio-00e29ab23a9ff4dfebc8f6078c87526dfb3703b5fd76b23bb451588311bf12cf WatchSource:0}: Error finding container 00e29ab23a9ff4dfebc8f6078c87526dfb3703b5fd76b23bb451588311bf12cf: Status 404 returned error can't find the container with id 00e29ab23a9ff4dfebc8f6078c87526dfb3703b5fd76b23bb451588311bf12cf Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.097358 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-catalog-content\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.097418 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqpf8\" (UniqueName: \"kubernetes.io/projected/5b9b79f2-127c-4533-a170-8cb16e845c18-kube-api-access-bqpf8\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" 
Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.097437 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-utilities\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.198601 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqpf8\" (UniqueName: \"kubernetes.io/projected/5b9b79f2-127c-4533-a170-8cb16e845c18-kube-api-access-bqpf8\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.198658 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-utilities\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.198715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-catalog-content\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.199220 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-utilities\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.199274 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9b79f2-127c-4533-a170-8cb16e845c18-catalog-content\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.216356 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqpf8\" (UniqueName: \"kubernetes.io/projected/5b9b79f2-127c-4533-a170-8cb16e845c18-kube-api-access-bqpf8\") pod \"community-operators-8nrlf\" (UID: \"5b9b79f2-127c-4533-a170-8cb16e845c18\") " pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.297067 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.643939 4769 generic.go:334] "Generic (PLEG): container finished" podID="6bbcc4b3-c280-4093-9419-7d94204256fe" containerID="a7d3f114d84fdd1b7fc8a96a58d1e8a6cab446d40790a667348247eb14db6048" exitCode=0 Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.644061 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vlvj" event={"ID":"6bbcc4b3-c280-4093-9419-7d94204256fe","Type":"ContainerDied","Data":"a7d3f114d84fdd1b7fc8a96a58d1e8a6cab446d40790a667348247eb14db6048"} Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.644912 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vlvj" event={"ID":"6bbcc4b3-c280-4093-9419-7d94204256fe","Type":"ContainerStarted","Data":"00e29ab23a9ff4dfebc8f6078c87526dfb3703b5fd76b23bb451588311bf12cf"} Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.648967 4769 generic.go:334] "Generic (PLEG): container finished" podID="d88e1938-2f4c-43c7-9af2-98fb7222cee2" containerID="f66755819f7254a689cbeefb6e794f94d5894872bff4f9c5b200a02dd002c683" exitCode=0 Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.649053 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerDied","Data":"f66755819f7254a689cbeefb6e794f94d5894872bff4f9c5b200a02dd002c683"} Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.653993 4769 generic.go:334] "Generic (PLEG): container finished" podID="c5db9abf-deb2-494a-b618-7180fbf1e53e" containerID="46c2d1490c2b3d837113558d5cc2951704a2c1cc8261955a692b3e63f7cd3d1b" exitCode=0 Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.654036 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtrsx" event={"ID":"c5db9abf-deb2-494a-b618-7180fbf1e53e","Type":"ContainerDied","Data":"46c2d1490c2b3d837113558d5cc2951704a2c1cc8261955a692b3e63f7cd3d1b"} Jan 22 13:49:16 crc kubenswrapper[4769]: I0122 13:49:16.682391 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8nrlf"] Jan 22 13:49:16 crc kubenswrapper[4769]: W0122 13:49:16.693333 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b9b79f2_127c_4533_a170_8cb16e845c18.slice/crio-79f279b0598123344907312ef57e1189a96915d7f3e641075cbc94cf7016cfa1 WatchSource:0}: Error finding container 79f279b0598123344907312ef57e1189a96915d7f3e641075cbc94cf7016cfa1: Status 404 returned error can't find the container with id 79f279b0598123344907312ef57e1189a96915d7f3e641075cbc94cf7016cfa1 Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.667773 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twpxx" event={"ID":"d88e1938-2f4c-43c7-9af2-98fb7222cee2","Type":"ContainerStarted","Data":"0a3d25e60aeabb9720241aea7707a518021511464c51ba7e6020079946a70675"} Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.671675 4769 generic.go:334] "Generic (PLEG): container finished" podID="5b9b79f2-127c-4533-a170-8cb16e845c18" containerID="bfea64a322374f9fefb725dd0c996f81ee60b921f2c788b5f620e9e7d4d9118e" exitCode=0 Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.671751 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerDied","Data":"bfea64a322374f9fefb725dd0c996f81ee60b921f2c788b5f620e9e7d4d9118e"} Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.671835 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerStarted","Data":"79f279b0598123344907312ef57e1189a96915d7f3e641075cbc94cf7016cfa1"} Jan 22 13:49:17 crc kubenswrapper[4769]: I0122 13:49:17.690401 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-twpxx" podStartSLOduration=2.118769379 podStartE2EDuration="4.690381295s" podCreationTimestamp="2026-01-22 13:49:13 +0000 UTC" firstStartedPulling="2026-01-22 13:49:14.631216912 +0000 UTC m=+334.042326841" lastFinishedPulling="2026-01-22 13:49:17.202828828 +0000 UTC m=+336.613938757" observedRunningTime="2026-01-22 13:49:17.685410451 +0000 UTC m=+337.096520380" watchObservedRunningTime="2026-01-22 13:49:17.690381295 +0000 UTC m=+337.101491234" Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.678411 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerStarted","Data":"30d77cde715c85c3ef50147b03698d9c5cc0d0b77b0369a4eb38e4795f5ee192"} Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.681087 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtrsx" event={"ID":"c5db9abf-deb2-494a-b618-7180fbf1e53e","Type":"ContainerStarted","Data":"40d697b4c769615858c7997f36004ed5a22a9f890686a7a882dfd468a26735dd"} Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.684939 4769 generic.go:334] "Generic (PLEG): container finished" podID="6bbcc4b3-c280-4093-9419-7d94204256fe" containerID="2e8cfc5abcfaebbc01e5c63a4c33838ac6db3f9d9a0ddc3d517cfd24231e91e3" exitCode=0 Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.685586 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vlvj" event={"ID":"6bbcc4b3-c280-4093-9419-7d94204256fe","Type":"ContainerDied","Data":"2e8cfc5abcfaebbc01e5c63a4c33838ac6db3f9d9a0ddc3d517cfd24231e91e3"} Jan 22 13:49:18 crc kubenswrapper[4769]: I0122 13:49:18.746099 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dtrsx" podStartSLOduration=3.768730363 podStartE2EDuration="6.746083235s" podCreationTimestamp="2026-01-22 13:49:12 +0000 UTC" firstStartedPulling="2026-01-22 13:49:14.626540237 +0000 UTC m=+334.037650166" lastFinishedPulling="2026-01-22 13:49:17.603893109 +0000 UTC m=+337.015003038" observedRunningTime="2026-01-22 13:49:18.74348063 +0000 UTC m=+338.154590559" watchObservedRunningTime="2026-01-22 13:49:18.746083235 +0000 UTC m=+338.157193164" Jan 22 13:49:19 crc kubenswrapper[4769]: I0122 13:49:19.692222 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vlvj" event={"ID":"6bbcc4b3-c280-4093-9419-7d94204256fe","Type":"ContainerStarted","Data":"ff12fb3e73ec2e549026400bc60ec25a5648bdb0ec104c5a57d93279a25a96d9"} Jan 22 13:49:19 crc kubenswrapper[4769]: I0122 13:49:19.694759 4769 generic.go:334] "Generic (PLEG): container finished" podID="5b9b79f2-127c-4533-a170-8cb16e845c18" 
containerID="30d77cde715c85c3ef50147b03698d9c5cc0d0b77b0369a4eb38e4795f5ee192" exitCode=0 Jan 22 13:49:19 crc kubenswrapper[4769]: I0122 13:49:19.694831 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerDied","Data":"30d77cde715c85c3ef50147b03698d9c5cc0d0b77b0369a4eb38e4795f5ee192"} Jan 22 13:49:19 crc kubenswrapper[4769]: I0122 13:49:19.713564 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vlvj" podStartSLOduration=2.231917061 podStartE2EDuration="4.713548148s" podCreationTimestamp="2026-01-22 13:49:15 +0000 UTC" firstStartedPulling="2026-01-22 13:49:16.648054723 +0000 UTC m=+336.059164642" lastFinishedPulling="2026-01-22 13:49:19.12968577 +0000 UTC m=+338.540795729" observedRunningTime="2026-01-22 13:49:19.711750997 +0000 UTC m=+339.122860946" watchObservedRunningTime="2026-01-22 13:49:19.713548148 +0000 UTC m=+339.124658077" Jan 22 13:49:21 crc kubenswrapper[4769]: I0122 13:49:21.707940 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8nrlf" event={"ID":"5b9b79f2-127c-4533-a170-8cb16e845c18","Type":"ContainerStarted","Data":"98eaddfcc73d3f67c6032f990f6435d2df30450e46ad2bda1c74b7fecd91fd0d"} Jan 22 13:49:21 crc kubenswrapper[4769]: I0122 13:49:21.726188 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8nrlf" podStartSLOduration=4.304054694 podStartE2EDuration="6.726170367s" podCreationTimestamp="2026-01-22 13:49:15 +0000 UTC" firstStartedPulling="2026-01-22 13:49:17.673107935 +0000 UTC m=+337.084217864" lastFinishedPulling="2026-01-22 13:49:20.095223608 +0000 UTC m=+339.506333537" observedRunningTime="2026-01-22 13:49:21.724653283 +0000 UTC m=+341.135763232" watchObservedRunningTime="2026-01-22 13:49:21.726170367 +0000 UTC m=+341.137280296" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.299823 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.300162 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.344695 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.751611 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dtrsx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.942202 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.942535 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:23 crc kubenswrapper[4769]: I0122 13:49:23.977715 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:24 crc kubenswrapper[4769]: I0122 13:49:24.757706 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-twpxx" Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 
Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 13:49:25.692653 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 13:49:25.692724 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 13:49:25.735581 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:25 crc kubenswrapper[4769]: I0122 13:49:25.772934 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vlvj" Jan 22 13:49:26 crc kubenswrapper[4769]: I0122 13:49:26.297415 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:26 crc kubenswrapper[4769]: I0122 13:49:26.298650 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:26 crc kubenswrapper[4769]: I0122 13:49:26.338058 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:26 crc kubenswrapper[4769]: I0122 13:49:26.770588 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8nrlf" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.780942 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fc69x"] Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.782035 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.798814 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fc69x"] Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931685 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-bound-sa-token\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931741 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0556840e-70ca-40ac-810a-11b1ddec78d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931844 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931878 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\"
(UniqueName: \"kubernetes.io/secret/0556840e-70ca-40ac-810a-11b1ddec78d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-tls\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931930 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-certificates\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931951 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-trusted-ca\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.931981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-kube-api-access-sblrx\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:34 crc kubenswrapper[4769]: I0122 13:49:34.971257 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033615 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0556840e-70ca-40ac-810a-11b1ddec78d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033702 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0556840e-70ca-40ac-810a-11b1ddec78d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033728 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-tls\") pod 
\"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033754 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-certificates\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033782 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-trusted-ca\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033827 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-kube-api-access-sblrx\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.033894 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-bound-sa-token\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.034215 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0556840e-70ca-40ac-810a-11b1ddec78d9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.035278 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-certificates\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.035863 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0556840e-70ca-40ac-810a-11b1ddec78d9-trusted-ca\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.042223 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-registry-tls\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.046784 4769 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0556840e-70ca-40ac-810a-11b1ddec78d9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.051674 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sblrx\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-kube-api-access-sblrx\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.052401 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0556840e-70ca-40ac-810a-11b1ddec78d9-bound-sa-token\") pod \"image-registry-66df7c8f76-fc69x\" (UID: \"0556840e-70ca-40ac-810a-11b1ddec78d9\") " pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.101483 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.559087 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fc69x"] Jan 22 13:49:35 crc kubenswrapper[4769]: W0122 13:49:35.567002 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0556840e_70ca_40ac_810a_11b1ddec78d9.slice/crio-894cf8ee48d96c6ce67ad728ea7acc9a04ce91b31c737c897aedede5d47c72ed WatchSource:0}: Error finding container 894cf8ee48d96c6ce67ad728ea7acc9a04ce91b31c737c897aedede5d47c72ed: Status 404 returned error can't find the container with id 894cf8ee48d96c6ce67ad728ea7acc9a04ce91b31c737c897aedede5d47c72ed Jan 22 13:49:35 crc kubenswrapper[4769]: I0122 13:49:35.777336 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" event={"ID":"0556840e-70ca-40ac-810a-11b1ddec78d9","Type":"ContainerStarted","Data":"894cf8ee48d96c6ce67ad728ea7acc9a04ce91b31c737c897aedede5d47c72ed"} Jan 22 13:49:38 crc kubenswrapper[4769]: I0122 13:49:38.794272 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" event={"ID":"0556840e-70ca-40ac-810a-11b1ddec78d9","Type":"ContainerStarted","Data":"209ed7fbd942a144fd1ffafb5b0573b972f48af0d30d8d2d354eb55cc37b9920"} Jan 22 13:49:38 crc kubenswrapper[4769]: I0122 13:49:38.794587 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:38 crc kubenswrapper[4769]: I0122 13:49:38.813824 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" podStartSLOduration=4.813806501 podStartE2EDuration="4.813806501s" podCreationTimestamp="2026-01-22 13:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:38.810970319 +0000 UTC m=+358.222080248" watchObservedRunningTime="2026-01-22 13:49:38.813806501 +0000 UTC m=+358.224916430" Jan 22 13:49:40 crc kubenswrapper[4769]: I0122 13:49:40.481934 4769 
patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:49:40 crc kubenswrapper[4769]: I0122 13:49:40.482013 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.442448 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"] Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.443335 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager" containerID="cri-o://3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3" gracePeriod=30 Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811469 4769 generic.go:334] "Generic (PLEG): container finished" podID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerID="3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3" exitCode=0 Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811584 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerDied","Data":"3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3"} Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811763 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" event={"ID":"bf9268f0-d3a5-470c-b734-a25b11ebb088","Type":"ContainerDied","Data":"6cc1e5e19564d09af54c555b766313a9b3a7cbbeabd3df7a270e34fcad39380a"} Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.811779 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc1e5e19564d09af54c555b766313a9b3a7cbbeabd3df7a270e34fcad39380a" Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.832203 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.923261 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") pod \"bf9268f0-d3a5-470c-b734-a25b11ebb088\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.923384 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") pod \"bf9268f0-d3a5-470c-b734-a25b11ebb088\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.923416 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") pod \"bf9268f0-d3a5-470c-b734-a25b11ebb088\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.923448 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") pod \"bf9268f0-d3a5-470c-b734-a25b11ebb088\" (UID: \"bf9268f0-d3a5-470c-b734-a25b11ebb088\") " Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.924389 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca" (OuterVolumeSpecName: "client-ca") pod "bf9268f0-d3a5-470c-b734-a25b11ebb088" (UID: "bf9268f0-d3a5-470c-b734-a25b11ebb088"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.924427 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config" (OuterVolumeSpecName: "config") pod "bf9268f0-d3a5-470c-b734-a25b11ebb088" (UID: "bf9268f0-d3a5-470c-b734-a25b11ebb088"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.928418 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx" (OuterVolumeSpecName: "kube-api-access-5mbhx") pod "bf9268f0-d3a5-470c-b734-a25b11ebb088" (UID: "bf9268f0-d3a5-470c-b734-a25b11ebb088"). InnerVolumeSpecName "kube-api-access-5mbhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:49:41 crc kubenswrapper[4769]: I0122 13:49:41.928638 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bf9268f0-d3a5-470c-b734-a25b11ebb088" (UID: "bf9268f0-d3a5-470c-b734-a25b11ebb088"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.025383 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9268f0-d3a5-470c-b734-a25b11ebb088-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.025424 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.025433 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mbhx\" (UniqueName: \"kubernetes.io/projected/bf9268f0-d3a5-470c-b734-a25b11ebb088-kube-api-access-5mbhx\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.025445 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf9268f0-d3a5-470c-b734-a25b11ebb088-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.815232 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74" Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.840302 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"] Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.843277 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9db9fd7fb-fmp74"] Jan 22 13:49:42 crc kubenswrapper[4769]: I0122 13:49:42.891552 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" path="/var/lib/kubelet/pods/bf9268f0-d3a5-470c-b734-a25b11ebb088/volumes" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.455595 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"] Jan 22 13:49:43 crc kubenswrapper[4769]: E0122 13:49:43.455907 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.455931 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.456060 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9268f0-d3a5-470c-b734-a25b11ebb088" containerName="route-controller-manager" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.456502 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460279 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460288 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460340 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460354 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460472 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.460505 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.465747 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"] Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.544500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-client-ca\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.544809 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-config\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.545001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxtv\" (UniqueName: \"kubernetes.io/projected/0624b060-2bdf-4498-9a39-3c13923de378-kube-api-access-shxtv\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.545132 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0624b060-2bdf-4498-9a39-3c13923de378-serving-cert\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.646558 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxtv\" (UniqueName: \"kubernetes.io/projected/0624b060-2bdf-4498-9a39-3c13923de378-kube-api-access-shxtv\") pod 
\"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.646645 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0624b060-2bdf-4498-9a39-3c13923de378-serving-cert\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.646679 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-client-ca\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.646712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-config\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.647912 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-client-ca\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.648871 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0624b060-2bdf-4498-9a39-3c13923de378-config\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.653088 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0624b060-2bdf-4498-9a39-3c13923de378-serving-cert\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.664193 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxtv\" (UniqueName: \"kubernetes.io/projected/0624b060-2bdf-4498-9a39-3c13923de378-kube-api-access-shxtv\") pod \"route-controller-manager-7b57bf8468-vhp4h\" (UID: \"0624b060-2bdf-4498-9a39-3c13923de378\") " pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:43 crc kubenswrapper[4769]: I0122 13:49:43.772711 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.181822 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h"] Jan 22 13:49:44 crc kubenswrapper[4769]: W0122 13:49:44.187809 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0624b060_2bdf_4498_9a39_3c13923de378.slice/crio-2db9bae66c453b201eeff18ef735234e01bc923a438f6b2bc730a7e0b9cb1b67 WatchSource:0}: Error finding container 2db9bae66c453b201eeff18ef735234e01bc923a438f6b2bc730a7e0b9cb1b67: Status 404 returned error can't find the container with id 2db9bae66c453b201eeff18ef735234e01bc923a438f6b2bc730a7e0b9cb1b67 Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.829920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" event={"ID":"0624b060-2bdf-4498-9a39-3c13923de378","Type":"ContainerStarted","Data":"843cbe9217f2b579d9535d27280ed4c9dcec2cc2f1248156f49c49a28bfccfb8"} Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.829988 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" event={"ID":"0624b060-2bdf-4498-9a39-3c13923de378","Type":"ContainerStarted","Data":"2db9bae66c453b201eeff18ef735234e01bc923a438f6b2bc730a7e0b9cb1b67"} Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.830530 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:44 crc kubenswrapper[4769]: I0122 13:49:44.852914 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" podStartSLOduration=3.8528934919999998 podStartE2EDuration="3.852893492s" podCreationTimestamp="2026-01-22 13:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:49:44.847207427 +0000 UTC m=+364.258317406" watchObservedRunningTime="2026-01-22 13:49:44.852893492 +0000 UTC m=+364.264003441" Jan 22 13:49:45 crc kubenswrapper[4769]: I0122 13:49:45.010739 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b57bf8468-vhp4h" Jan 22 13:49:55 crc kubenswrapper[4769]: I0122 13:49:55.111771 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fc69x" Jan 22 13:49:55 crc kubenswrapper[4769]: I0122 13:49:55.173022 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:50:10 crc kubenswrapper[4769]: I0122 13:50:10.481691 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:50:10 crc kubenswrapper[4769]: I0122 13:50:10.482300 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" 
podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.222404 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerName="registry" containerID="cri-o://bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" gracePeriod=30 Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.578436 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682351 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682434 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682496 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682562 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682627 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.682659 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.683013 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.683830 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca" (OuterVolumeSpecName: 
"trusted-ca") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.683858 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.684092 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\" (UID: \"75dcccce-425a-46ab-bfeb-dc5a0ee835d4\") " Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.684316 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.684329 4769 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.691469 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.691903 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn" (OuterVolumeSpecName: "kube-api-access-vg9rn") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "kube-api-access-vg9rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.692674 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.692921 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.695781 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.702305 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "75dcccce-425a-46ab-bfeb-dc5a0ee835d4" (UID: "75dcccce-425a-46ab-bfeb-dc5a0ee835d4"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785281 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785325 4769 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785339 4769 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785356 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg9rn\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-kube-api-access-vg9rn\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:20 crc kubenswrapper[4769]: I0122 13:50:20.785371 4769 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dcccce-425a-46ab-bfeb-dc5a0ee835d4-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019817 4769 generic.go:334] "Generic (PLEG): container finished" podID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerID="bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" exitCode=0 Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019853 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" event={"ID":"75dcccce-425a-46ab-bfeb-dc5a0ee835d4","Type":"ContainerDied","Data":"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484"} Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019877 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" event={"ID":"75dcccce-425a-46ab-bfeb-dc5a0ee835d4","Type":"ContainerDied","Data":"65a07796fc29ddbb6109cfb9449db8675835bbaed67ec222e3b441daddcd1e4a"} Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019894 4769 scope.go:117] "RemoveContainer" containerID="bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.019993 4769 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jhd8d" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.039255 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.043431 4769 scope.go:117] "RemoveContainer" containerID="bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.044726 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jhd8d"] Jan 22 13:50:21 crc kubenswrapper[4769]: E0122 13:50:21.044751 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484\": container with ID starting with bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484 not found: ID does not exist" containerID="bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484" Jan 22 13:50:21 crc kubenswrapper[4769]: I0122 13:50:21.044848 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484"} err="failed to get container status \"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484\": rpc error: code = NotFound desc = could not find container \"bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484\": container with ID starting with bc3d673f0c6c961ce4f8660b81b0fde6d0b971f745bc5a43865df409316c3484 not found: ID does not exist" Jan 22 13:50:22 crc kubenswrapper[4769]: I0122 13:50:22.891752 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" path="/var/lib/kubelet/pods/75dcccce-425a-46ab-bfeb-dc5a0ee835d4/volumes" Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.482465 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.482972 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.483058 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.483555 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:50:40 crc kubenswrapper[4769]: I0122 13:50:40.483598 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41" gracePeriod=600 Jan 22 13:50:41 crc kubenswrapper[4769]: I0122 13:50:41.131297 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41" exitCode=0 Jan 22 13:50:41 crc kubenswrapper[4769]: I0122 13:50:41.131392 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41"} Jan 22 13:50:41 crc kubenswrapper[4769]: I0122 13:50:41.131828 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d"} Jan 22 13:50:41 crc kubenswrapper[4769]: I0122 13:50:41.131878 4769 scope.go:117] "RemoveContainer" containerID="9528976f6e0625739097546d794445c24881673bfd0df525a77bbbd61e67897d" Jan 22 13:52:40 crc kubenswrapper[4769]: I0122 13:52:40.481731 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:52:40 crc kubenswrapper[4769]: I0122 13:52:40.483506 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:52:41 crc kubenswrapper[4769]: I0122 13:52:41.054237 4769 scope.go:117] "RemoveContainer" containerID="2f10c10086311c3110b8a32a37138f280d5ba030f8b232e9aab33f5fe28c6210" Jan 22 13:53:10 crc kubenswrapper[4769]: I0122 13:53:10.482586 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:53:10 crc kubenswrapper[4769]: I0122 13:53:10.483209 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.482300 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.482899 4769 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.483079 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.483740 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:53:40 crc kubenswrapper[4769]: I0122 13:53:40.483863 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d" gracePeriod=600 Jan 22 13:53:41 crc kubenswrapper[4769]: I0122 13:53:41.375079 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d" exitCode=0 Jan 22 13:53:41 crc kubenswrapper[4769]: I0122 13:53:41.375827 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d"} Jan 22 13:53:41 crc kubenswrapper[4769]: I0122 13:53:41.375877 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17"} Jan 22 13:53:41 crc kubenswrapper[4769]: I0122 13:53:41.375906 4769 scope.go:117] "RemoveContainer" containerID="bbd22e04ee72948953a90ab44939dc109e22abcfa3a37b3bf1a288ca6535ed41" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.488860 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb"] Jan 22 13:54:36 crc kubenswrapper[4769]: E0122 13:54:36.489536 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerName="registry" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.489548 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerName="registry" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.489649 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="75dcccce-425a-46ab-bfeb-dc5a0ee835d4" containerName="registry" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.490053 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.493029 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.493228 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.499029 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-shtxc" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.499108 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-vn9qf"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.503918 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.507176 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-4dgt9" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.511617 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.534631 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vn9qf"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.538914 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzj2v"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.539535 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.542179 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tlbpw" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.549975 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzj2v"] Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.577840 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqs5\" (UniqueName: \"kubernetes.io/projected/2bdf39e4-511e-4d06-a19a-7aa0cda68e94-kube-api-access-7rqs5\") pod \"cert-manager-cainjector-cf98fcc89-ptnxb\" (UID: \"2bdf39e4-511e-4d06-a19a-7aa0cda68e94\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.577963 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzgr\" (UniqueName: \"kubernetes.io/projected/0390ceac-8902-475a-b739-ddc13392f828-kube-api-access-dhzgr\") pod \"cert-manager-858654f9db-vn9qf\" (UID: \"0390ceac-8902-475a-b739-ddc13392f828\") " pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.678591 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhzgr\" (UniqueName: \"kubernetes.io/projected/0390ceac-8902-475a-b739-ddc13392f828-kube-api-access-dhzgr\") pod \"cert-manager-858654f9db-vn9qf\" (UID: \"0390ceac-8902-475a-b739-ddc13392f828\") " pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 
13:54:36.678665 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68w5\" (UniqueName: \"kubernetes.io/projected/e3a1ec89-c852-4274-b95b-c070b9cf8c20-kube-api-access-x68w5\") pod \"cert-manager-webhook-687f57d79b-dzj2v\" (UID: \"e3a1ec89-c852-4274-b95b-c070b9cf8c20\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.678696 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rqs5\" (UniqueName: \"kubernetes.io/projected/2bdf39e4-511e-4d06-a19a-7aa0cda68e94-kube-api-access-7rqs5\") pod \"cert-manager-cainjector-cf98fcc89-ptnxb\" (UID: \"2bdf39e4-511e-4d06-a19a-7aa0cda68e94\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.698577 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqs5\" (UniqueName: \"kubernetes.io/projected/2bdf39e4-511e-4d06-a19a-7aa0cda68e94-kube-api-access-7rqs5\") pod \"cert-manager-cainjector-cf98fcc89-ptnxb\" (UID: \"2bdf39e4-511e-4d06-a19a-7aa0cda68e94\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.698670 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhzgr\" (UniqueName: \"kubernetes.io/projected/0390ceac-8902-475a-b739-ddc13392f828-kube-api-access-dhzgr\") pod \"cert-manager-858654f9db-vn9qf\" (UID: \"0390ceac-8902-475a-b739-ddc13392f828\") " pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.779345 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x68w5\" (UniqueName: \"kubernetes.io/projected/e3a1ec89-c852-4274-b95b-c070b9cf8c20-kube-api-access-x68w5\") pod \"cert-manager-webhook-687f57d79b-dzj2v\" (UID: \"e3a1ec89-c852-4274-b95b-c070b9cf8c20\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.798145 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x68w5\" (UniqueName: \"kubernetes.io/projected/e3a1ec89-c852-4274-b95b-c070b9cf8c20-kube-api-access-x68w5\") pod \"cert-manager-webhook-687f57d79b-dzj2v\" (UID: \"e3a1ec89-c852-4274-b95b-c070b9cf8c20\") " pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.841433 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.857834 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-vn9qf" Jan 22 13:54:36 crc kubenswrapper[4769]: I0122 13:54:36.866101 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.058917 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb"] Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.076505 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.093478 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-dzj2v"] Jan 22 13:54:37 crc kubenswrapper[4769]: W0122 13:54:37.097771 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3a1ec89_c852_4274_b95b_c070b9cf8c20.slice/crio-e70a4d3c8d180494243ebf67ca29b136488035bcbacd715e67cae2295384e315 WatchSource:0}: Error finding container e70a4d3c8d180494243ebf67ca29b136488035bcbacd715e67cae2295384e315: Status 404 returned error can't find the container with id e70a4d3c8d180494243ebf67ca29b136488035bcbacd715e67cae2295384e315 Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.132002 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-vn9qf"] Jan 22 13:54:37 crc kubenswrapper[4769]: W0122 13:54:37.135115 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0390ceac_8902_475a_b739_ddc13392f828.slice/crio-d38762fd08b6a6d29c8149d97a505977701f8bcc332bd95de358b235fe8c13c8 WatchSource:0}: Error finding container d38762fd08b6a6d29c8149d97a505977701f8bcc332bd95de358b235fe8c13c8: Status 404 returned error can't find the container with id d38762fd08b6a6d29c8149d97a505977701f8bcc332bd95de358b235fe8c13c8 Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.687195 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vn9qf" event={"ID":"0390ceac-8902-475a-b739-ddc13392f828","Type":"ContainerStarted","Data":"d38762fd08b6a6d29c8149d97a505977701f8bcc332bd95de358b235fe8c13c8"} Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.688750 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" event={"ID":"e3a1ec89-c852-4274-b95b-c070b9cf8c20","Type":"ContainerStarted","Data":"e70a4d3c8d180494243ebf67ca29b136488035bcbacd715e67cae2295384e315"} Jan 22 13:54:37 crc kubenswrapper[4769]: I0122 13:54:37.689761 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" event={"ID":"2bdf39e4-511e-4d06-a19a-7aa0cda68e94","Type":"ContainerStarted","Data":"80474f87f16d034b976b0a7d6850685afd199dca4250009888c5348f6b819510"} Jan 22 13:54:40 crc kubenswrapper[4769]: I0122 13:54:40.710084 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-vn9qf" event={"ID":"0390ceac-8902-475a-b739-ddc13392f828","Type":"ContainerStarted","Data":"bf4502bda093bf1c79d6ac2be6d5c6ef1715f46fb8ee6d50bfb3a3dff015df65"} Jan 22 13:54:40 crc kubenswrapper[4769]: I0122 13:54:40.732328 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-vn9qf" podStartSLOduration=1.991057149 podStartE2EDuration="4.732304426s" podCreationTimestamp="2026-01-22 13:54:36 +0000 UTC" firstStartedPulling="2026-01-22 13:54:37.138398792 +0000 UTC m=+656.549508721" 
lastFinishedPulling="2026-01-22 13:54:39.879646069 +0000 UTC m=+659.290755998" observedRunningTime="2026-01-22 13:54:40.723060013 +0000 UTC m=+660.134169962" watchObservedRunningTime="2026-01-22 13:54:40.732304426 +0000 UTC m=+660.143414355" Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.719289 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" event={"ID":"e3a1ec89-c852-4274-b95b-c070b9cf8c20","Type":"ContainerStarted","Data":"cd57fd84c5caacb814ca56519a37f9ee73e612e7657236a80acee23f6147eb1d"} Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.719874 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.722734 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" event={"ID":"2bdf39e4-511e-4d06-a19a-7aa0cda68e94","Type":"ContainerStarted","Data":"8317071a82211f0e5aacdba958f3bbab1b6b1b216e23a1b333561f916cd25a85"} Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.745193 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" podStartSLOduration=1.685460223 podStartE2EDuration="5.745169446s" podCreationTimestamp="2026-01-22 13:54:36 +0000 UTC" firstStartedPulling="2026-01-22 13:54:37.101178989 +0000 UTC m=+656.512288918" lastFinishedPulling="2026-01-22 13:54:41.160888212 +0000 UTC m=+660.571998141" observedRunningTime="2026-01-22 13:54:41.739190131 +0000 UTC m=+661.150300100" watchObservedRunningTime="2026-01-22 13:54:41.745169446 +0000 UTC m=+661.156279415" Jan 22 13:54:41 crc kubenswrapper[4769]: I0122 13:54:41.763230 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-ptnxb" podStartSLOduration=1.685765062 podStartE2EDuration="5.763196281s" podCreationTimestamp="2026-01-22 13:54:36 +0000 UTC" firstStartedPulling="2026-01-22 13:54:37.076226144 +0000 UTC m=+656.487336073" lastFinishedPulling="2026-01-22 13:54:41.153657363 +0000 UTC m=+660.564767292" observedRunningTime="2026-01-22 13:54:41.754664916 +0000 UTC m=+661.165774845" watchObservedRunningTime="2026-01-22 13:54:41.763196281 +0000 UTC m=+661.174306250" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.363538 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrg8z"] Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364428 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-node" containerID="cri-o://f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364443 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364484 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="nbdb" 
containerID="cri-o://a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364553 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-acl-logging" containerID="cri-o://599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364611 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="sbdb" containerID="cri-o://3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364528 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="northd" containerID="cri-o://662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.364384 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-controller" containerID="cri-o://73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.423445 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" containerID="cri-o://d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" gracePeriod=30 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.656713 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.659175 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovn-acl-logging/0.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.659907 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovn-controller/0.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.660340 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716521 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fg2hx"] Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716770 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="sbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716817 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="sbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716852 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716862 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716871 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716880 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716893 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kubecfg-setup" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716902 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kubecfg-setup" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716914 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716922 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716934 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-node" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716942 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-node" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716952 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716959 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716969 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="northd" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716977 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="northd" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.716987 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.716994 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.717005 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="nbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717013 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="nbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.717023 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-acl-logging" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717031 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-acl-logging" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.717044 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717051 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717154 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-acl-logging" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717168 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717181 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-node" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717190 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717198 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717206 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717214 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="northd" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717225 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="nbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717235 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovn-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717243 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="sbdb" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 
13:54:46.717372 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717381 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717530 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.717737 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerName="ovnkube-controller" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.719891 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.754636 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/2.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.755165 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/1.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.755215 4769 generic.go:334] "Generic (PLEG): container finished" podID="d4186e93-df8a-49d3-9068-c8b8acd05baa" containerID="8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3" exitCode=2 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.755298 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerDied","Data":"8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.755453 4769 scope.go:117] "RemoveContainer" containerID="ffa3ce92a87f448f60b39283929d77139230e6bb0052cdeb6303e0f6b13997d8" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.756094 4769 scope.go:117] "RemoveContainer" containerID="8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.756417 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fclh4_openshift-multus(d4186e93-df8a-49d3-9068-c8b8acd05baa)\"" pod="openshift-multus/multus-fclh4" podUID="d4186e93-df8a-49d3-9068-c8b8acd05baa" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.758061 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovnkube-controller/3.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763077 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovn-acl-logging/0.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763507 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jrg8z_9c028db8-99b9-422d-ba46-e1a2db06ce3c/ovn-controller/0.log" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763855 4769 generic.go:334] "Generic (PLEG): container finished" 
podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763871 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763879 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763886 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763893 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763901 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" exitCode=0 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763908 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" exitCode=143 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763916 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" exitCode=143 Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763934 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763956 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763968 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763968 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.763978 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764081 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764094 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764108 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764118 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764125 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764130 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764135 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764141 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764145 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764150 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764155 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764159 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:54:46 crc 
kubenswrapper[4769]: I0122 13:54:46.764167 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764175 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764181 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764186 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764191 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764197 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764203 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764208 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764213 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764217 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764222 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764229 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764236 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764243 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764249 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764255 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764261 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764268 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764274 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764281 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764287 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764293 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764302 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jrg8z" event={"ID":"9c028db8-99b9-422d-ba46-e1a2db06ce3c","Type":"ContainerDied","Data":"e2d3c55e05f15106417cacacd13bd2ff48a7d39f5b85eb5a6e946e2cf2413457"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764311 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764322 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764332 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764342 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764348 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764355 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764361 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764367 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764373 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.764379 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.788032 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.803379 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.819819 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820500 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820629 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820623 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820679 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820733 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820779 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820827 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820849 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820997 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.820997 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821068 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821110 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821151 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821187 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821407 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821227 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821651 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821293 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821491 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821678 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821739 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821740 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821780 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket" (OuterVolumeSpecName: "log-socket") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.821933 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822018 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822061 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822090 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822133 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822179 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822251 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") pod \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\" (UID: \"9c028db8-99b9-422d-ba46-e1a2db06ce3c\") " Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822327 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log" (OuterVolumeSpecName: "node-log") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822342 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822387 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.822453 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash" (OuterVolumeSpecName: "host-slash") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823035 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-netns\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823115 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-netd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823268 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-env-overrides\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823367 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-ovn\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823461 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823509 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-var-lib-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823541 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-slash\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823579 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-systemd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823608 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-node-log\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823693 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823807 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823843 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-log-socket\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.823896 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-systemd-units\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824003 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5426c965-79a4-46ea-b709-949e0a5e3065-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824053 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-bin\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824098 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-config\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824130 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-kubelet\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824201 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-etc-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824297 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-script-lib\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824352 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28zg\" (UniqueName: \"kubernetes.io/projected/5426c965-79a4-46ea-b709-949e0a5e3065-kube-api-access-w28zg\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824467 4769 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-node-log\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824487 4769 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824500 4769 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824514 4769 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824529 4769 reconciler_common.go:293] "Volume detached for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824542 4769 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824601 4769 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824614 4769 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824627 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824640 4769 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824653 4769 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824666 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824678 4769 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824691 4769 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824703 4769 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824715 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.824727 4769 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.828225 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.828352 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w" (OuterVolumeSpecName: "kube-api-access-p276w") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "kube-api-access-p276w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.835017 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9c028db8-99b9-422d-ba46-e1a2db06ce3c" (UID: "9c028db8-99b9-422d-ba46-e1a2db06ce3c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.835916 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.852845 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.869340 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.869347 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-dzj2v" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.891491 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.912219 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.925995 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-ovn\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926084 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-var-lib-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 
13:54:46.926109 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-slash\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926133 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-systemd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926182 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-node-log\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926210 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926287 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-log-socket\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926316 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-systemd-units\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926358 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5426c965-79a4-46ea-b709-949e0a5e3065-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926382 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-bin\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926415 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-etc-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926434 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-config\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926455 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-kubelet\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926485 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-script-lib\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926505 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w28zg\" (UniqueName: \"kubernetes.io/projected/5426c965-79a4-46ea-b709-949e0a5e3065-kube-api-access-w28zg\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926535 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-netd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926553 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-netns\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926577 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-env-overrides\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926609 4769 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c028db8-99b9-422d-ba46-e1a2db06ce3c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926619 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p276w\" (UniqueName: \"kubernetes.io/projected/9c028db8-99b9-422d-ba46-e1a2db06ce3c-kube-api-access-p276w\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.926632 
4769 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c028db8-99b9-422d-ba46-e1a2db06ce3c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927147 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-systemd-units\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927212 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-bin\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-ovn\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927313 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927359 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-var-lib-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927399 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-slash\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927435 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-systemd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927465 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-node-log\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927492 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-run-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: 
\"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-ovn-kubernetes\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927529 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-etc-openvswitch\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927621 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-env-overrides\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927624 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-run-netns\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927683 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-kubelet\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927707 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-log-socket\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.927983 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5426c965-79a4-46ea-b709-949e0a5e3065-host-cni-netd\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.928439 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-script-lib\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.928483 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.928887 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5426c965-79a4-46ea-b709-949e0a5e3065-ovnkube-config\") pod 
\"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.932904 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5426c965-79a4-46ea-b709-949e0a5e3065-ovn-node-metrics-cert\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.942971 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w28zg\" (UniqueName: \"kubernetes.io/projected/5426c965-79a4-46ea-b709-949e0a5e3065-kube-api-access-w28zg\") pod \"ovnkube-node-fg2hx\" (UID: \"5426c965-79a4-46ea-b709-949e0a5e3065\") " pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.948379 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.962933 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.963343 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.963401 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} err="failed to get container status \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.963431 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.963865 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.963899 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} err="failed to get container status \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist" 
Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.963921 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.964369 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.964461 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} err="failed to get container status \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.964479 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.964726 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.964754 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} err="failed to get container status \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.964770 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.965075 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965097 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} err="failed to get container status \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": rpc error: code = NotFound desc = could not find container 
\"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965109 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.965364 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965424 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} err="failed to get container status \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965461 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.965831 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965863 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} err="failed to get container status \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.965884 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.966137 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966164 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} 
err="failed to get container status \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966195 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.966461 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966491 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} err="failed to get container status \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966508 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" Jan 22 13:54:46 crc kubenswrapper[4769]: E0122 13:54:46.966806 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966836 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} err="failed to get container status \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.966855 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967058 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} err="failed to get container status \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with 
d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967090 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967300 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} err="failed to get container status \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967333 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967585 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} err="failed to get container status \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967616 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967868 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} err="failed to get container status \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.967894 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968098 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} err="failed to get container status \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968121 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968349 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} err="failed to get container status \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968373 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968574 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} err="failed to get container status \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968597 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968776 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} err="failed to get container status \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.968828 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969041 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} err="failed to get container status \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969063 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969249 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} err="failed to get container status \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist" Jan 
22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969269 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969466 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} err="failed to get container status \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969486 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969674 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} err="failed to get container status \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.969695 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970160 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} err="failed to get container status \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970187 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970408 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} err="failed to get container status \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": rpc error: code = NotFound desc = could not find container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970428 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970634 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} err="failed to get container status 
\"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970656 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.970976 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} err="failed to get container status \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971007 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971228 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} err="failed to get container status \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971254 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971442 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} err="failed to get container status \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971465 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971651 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} err="failed to get container status \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.971673 4769 scope.go:117] "RemoveContainer" 
containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.972552 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} err="failed to get container status \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.972578 4769 scope.go:117] "RemoveContainer" containerID="d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.972833 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b"} err="failed to get container status \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": rpc error: code = NotFound desc = could not find container \"d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b\": container with ID starting with d764d72b65db80595dfba72a7b23c9291cb7dd526ced59c756d4b53cc30aaa5b not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.972856 4769 scope.go:117] "RemoveContainer" containerID="5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973070 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6"} err="failed to get container status \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": rpc error: code = NotFound desc = could not find container \"5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6\": container with ID starting with 5f2aa40f7fb64759fa8a5f718239811c0af3f0c9eee1849d5e53acc1d4c486b6 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973095 4769 scope.go:117] "RemoveContainer" containerID="3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973311 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821"} err="failed to get container status \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": rpc error: code = NotFound desc = could not find container \"3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821\": container with ID starting with 3e7c6281ad26145a3acc0fc698848d06ef0524025a79cc0785cfdf1f40828821 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973337 4769 scope.go:117] "RemoveContainer" containerID="a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973523 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571"} err="failed to get container status \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": rpc error: code = NotFound desc = could not find 
container \"a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571\": container with ID starting with a63e4e37243c1da0493d2f2a1c468348a4aee47032e54fe98f0047bbddc3b571 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973545 4769 scope.go:117] "RemoveContainer" containerID="662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973703 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94"} err="failed to get container status \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": rpc error: code = NotFound desc = could not find container \"662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94\": container with ID starting with 662c6260e9da9eb76bae643e418ec5963e4c83b269478d9c2d9974064f726b94 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973725 4769 scope.go:117] "RemoveContainer" containerID="926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973967 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624"} err="failed to get container status \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": rpc error: code = NotFound desc = could not find container \"926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624\": container with ID starting with 926f38072715b258e544a03416a53df519d2a6a7328a1c4f1d14a55f117e3624 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.973989 4769 scope.go:117] "RemoveContainer" containerID="f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974160 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44"} err="failed to get container status \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": rpc error: code = NotFound desc = could not find container \"f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44\": container with ID starting with f59523bb3634d53e91f03df80b561ae6297baa7382419b522b14fcdeb1aecb44 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974183 4769 scope.go:117] "RemoveContainer" containerID="599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974350 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9"} err="failed to get container status \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": rpc error: code = NotFound desc = could not find container \"599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9\": container with ID starting with 599a0ac550c976e0acee1c2d4234f97ef0f1691126383a3664da9949a7b5d2f9 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974379 4769 scope.go:117] "RemoveContainer" containerID="73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974628 4769 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609"} err="failed to get container status \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": rpc error: code = NotFound desc = could not find container \"73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609\": container with ID starting with 73572e052fc3880d97e5595186b1fd2a3ac26f2c357fe5c53fa746d7ffabd609 not found: ID does not exist" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974654 4769 scope.go:117] "RemoveContainer" containerID="bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7" Jan 22 13:54:46 crc kubenswrapper[4769]: I0122 13:54:46.974883 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7"} err="failed to get container status \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": rpc error: code = NotFound desc = could not find container \"bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7\": container with ID starting with bec914c32dc752578f4d1a80f7a38b2a5a9a00aeb7d7ff1d17be955eea0343c7 not found: ID does not exist" Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.038145 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.104875 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrg8z"] Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.110525 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jrg8z"] Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.771151 4769 generic.go:334] "Generic (PLEG): container finished" podID="5426c965-79a4-46ea-b709-949e0a5e3065" containerID="08e06714602e0437c8faa07572c975ea10b2559622327eb668e75ca879a08e8e" exitCode=0 Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.771228 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerDied","Data":"08e06714602e0437c8faa07572c975ea10b2559622327eb668e75ca879a08e8e"} Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.771295 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"9c03d0d604a1fdcab84ed3e1ccfb05929328f575ebbcd28482da2348e89ffe3b"} Jan 22 13:54:47 crc kubenswrapper[4769]: I0122 13:54:47.777755 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/2.log" Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786449 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"4093a2a7e4de81ed357b13c0dd6bda0022fc81bb11e2545dab031db1f97fbfbc"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786770 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" 
event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"216e5d493ab6a7220a5e0b1b7060228dc5b35f33db0e39260bf16b54571ed24a"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786782 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"7d3f0193784f8def9429bb29a43f8846eb077816e7dc8e432561502f25fa7e28"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786813 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"556c5415aa810a0c23f3ddb28b87a525af8757c4278234ab9dd66732b0ff8ee1"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786824 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"4a335007f079f2ed399a0bd85c2fff302757fd7210eb4f8c7d454205b397f5e8"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.786832 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"b8007a22145ce439a54b7f443d0e5e5a15a425ebb0e71b28a29aede8aff375b4"} Jan 22 13:54:48 crc kubenswrapper[4769]: I0122 13:54:48.891616 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c028db8-99b9-422d-ba46-e1a2db06ce3c" path="/var/lib/kubelet/pods/9c028db8-99b9-422d-ba46-e1a2db06ce3c/volumes" Jan 22 13:54:50 crc kubenswrapper[4769]: I0122 13:54:50.802329 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"6cdf34ed3858d7dac5b9f1e6fa20d6c2d49f0852b3b073d23cf4b9e75c3f6e23"} Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.827588 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" event={"ID":"5426c965-79a4-46ea-b709-949e0a5e3065","Type":"ContainerStarted","Data":"2a32d995333c2eacd75275baaa95cf7274a2ae675c2ed6497c2574613548d4f0"} Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.828225 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.828293 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.855966 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:53 crc kubenswrapper[4769]: I0122 13:54:53.867641 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" podStartSLOduration=7.867624266 podStartE2EDuration="7.867624266s" podCreationTimestamp="2026-01-22 13:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:54:53.865156748 +0000 UTC m=+673.276266697" watchObservedRunningTime="2026-01-22 13:54:53.867624266 +0000 UTC m=+673.278734195" Jan 22 13:54:54 crc kubenswrapper[4769]: I0122 13:54:54.832985 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:54 crc kubenswrapper[4769]: I0122 13:54:54.862419 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:54:58 crc kubenswrapper[4769]: I0122 13:54:58.883829 4769 scope.go:117] "RemoveContainer" containerID="8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3" Jan 22 13:54:58 crc kubenswrapper[4769]: E0122 13:54:58.884643 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fclh4_openshift-multus(d4186e93-df8a-49d3-9068-c8b8acd05baa)\"" pod="openshift-multus/multus-fclh4" podUID="d4186e93-df8a-49d3-9068-c8b8acd05baa" Jan 22 13:55:13 crc kubenswrapper[4769]: I0122 13:55:13.883625 4769 scope.go:117] "RemoveContainer" containerID="8b525990498eb9a71e43d42c3191a2ad5043bcf24c857f8db1dc71b1a487d0c3" Jan 22 13:55:14 crc kubenswrapper[4769]: I0122 13:55:14.961011 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fclh4_d4186e93-df8a-49d3-9068-c8b8acd05baa/kube-multus/2.log" Jan 22 13:55:14 crc kubenswrapper[4769]: I0122 13:55:14.961542 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fclh4" event={"ID":"d4186e93-df8a-49d3-9068-c8b8acd05baa","Type":"ContainerStarted","Data":"f792b3c29b906b7ea6f4c0ef1e8550b85afba18327b0c1d9f0d5e9adbf131ef2"} Jan 22 13:55:17 crc kubenswrapper[4769]: I0122 13:55:17.066670 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fg2hx" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.796393 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx"] Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.798210 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.800060 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.805609 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx"] Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.904174 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.904379 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:26 crc kubenswrapper[4769]: I0122 13:55:26.904443 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.005530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.005618 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.005681 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.006216 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.006484 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.028454 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.119265 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:27 crc kubenswrapper[4769]: I0122 13:55:27.340219 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx"] Jan 22 13:55:28 crc kubenswrapper[4769]: I0122 13:55:28.042972 4769 generic.go:334] "Generic (PLEG): container finished" podID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerID="19064cbb406cb69a973f646d395d3f54b43223b566983ec672b8d9a56ee5a4be" exitCode=0 Jan 22 13:55:28 crc kubenswrapper[4769]: I0122 13:55:28.043254 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerDied","Data":"19064cbb406cb69a973f646d395d3f54b43223b566983ec672b8d9a56ee5a4be"} Jan 22 13:55:28 crc kubenswrapper[4769]: I0122 13:55:28.043286 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerStarted","Data":"2d39cf951748c2931cff939383e6cc1c867717da795501975d02dd23004aa1aa"} Jan 22 13:55:30 crc kubenswrapper[4769]: I0122 13:55:30.055691 4769 generic.go:334] "Generic (PLEG): container finished" podID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerID="775c8e064f9886ed088946a1c3372fb6398eec8196bd2d1a4eee646c3050fd6e" exitCode=0 Jan 22 13:55:30 crc kubenswrapper[4769]: I0122 13:55:30.055827 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerDied","Data":"775c8e064f9886ed088946a1c3372fb6398eec8196bd2d1a4eee646c3050fd6e"} Jan 22 13:55:31 crc kubenswrapper[4769]: I0122 13:55:31.066038 4769 generic.go:334] "Generic (PLEG): container finished" podID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerID="5b9fdd30766e2dbe2204dc878f575cb4f8ab94cb3fdf3ac93191b1f5678788b8" exitCode=0 Jan 22 13:55:31 crc kubenswrapper[4769]: I0122 
Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.334050 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx"
Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.474386 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") pod \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") "
Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.474447 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") pod \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") "
Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.474518 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") pod \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\" (UID: \"38dd0c5f-6afb-4730-8900-e3e8b33f282a\") "
Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.475750 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle" (OuterVolumeSpecName: "bundle") pod "38dd0c5f-6afb-4730-8900-e3e8b33f282a" (UID: "38dd0c5f-6afb-4730-8900-e3e8b33f282a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.481287 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz" (OuterVolumeSpecName: "kube-api-access-czqjz") pod "38dd0c5f-6afb-4730-8900-e3e8b33f282a" (UID: "38dd0c5f-6afb-4730-8900-e3e8b33f282a"). InnerVolumeSpecName "kube-api-access-czqjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.497984 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util" (OuterVolumeSpecName: "util") pod "38dd0c5f-6afb-4730-8900-e3e8b33f282a" (UID: "38dd0c5f-6afb-4730-8900-e3e8b33f282a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
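Annotation: the UnmountVolume/TearDown pairs are the volume manager's reconciler at work: it continuously compares the desired state of the world (volumes that scheduled pods still need) against the actual state (volumes currently mounted) and unmounts anything mounted but no longer desired, which is what happens here once the unpack pod is deleted. In miniature (set difference only; the real reconciler behind reconciler_common.go also handles attach/detach and device mounts):

package main

import "fmt"

// reconcile returns the volumes to unmount: present in the actual state
// but no longer wanted by any pod in the desired state.
func reconcile(desired, actual map[string]bool) []string {
    var unmount []string
    for vol := range actual {
        if !desired[vol] {
            unmount = append(unmount, vol)
        }
    }
    return unmount
}

func main() {
    desired := map[string]bool{} // pod 38dd0c5f... was deleted; nothing is desired
    actual := map[string]bool{"bundle": true, "util": true, "kube-api-access-czqjz": true}
    fmt.Println(reconcile(desired, actual)) // all three volumes get unmounted
}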
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.576602 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-util\") on node \"crc\" DevicePath \"\"" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.576644 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czqjz\" (UniqueName: \"kubernetes.io/projected/38dd0c5f-6afb-4730-8900-e3e8b33f282a-kube-api-access-czqjz\") on node \"crc\" DevicePath \"\"" Jan 22 13:55:32 crc kubenswrapper[4769]: I0122 13:55:32.576659 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38dd0c5f-6afb-4730-8900-e3e8b33f282a-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:55:33 crc kubenswrapper[4769]: I0122 13:55:33.079090 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" event={"ID":"38dd0c5f-6afb-4730-8900-e3e8b33f282a","Type":"ContainerDied","Data":"2d39cf951748c2931cff939383e6cc1c867717da795501975d02dd23004aa1aa"} Jan 22 13:55:33 crc kubenswrapper[4769]: I0122 13:55:33.079402 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d39cf951748c2931cff939383e6cc1c867717da795501975d02dd23004aa1aa" Jan 22 13:55:33 crc kubenswrapper[4769]: I0122 13:55:33.079316 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269128 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z29kl"] Jan 22 13:55:35 crc kubenswrapper[4769]: E0122 13:55:35.269329 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="extract" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269341 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="extract" Jan 22 13:55:35 crc kubenswrapper[4769]: E0122 13:55:35.269354 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="util" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269359 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="util" Jan 22 13:55:35 crc kubenswrapper[4769]: E0122 13:55:35.269372 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="pull" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269377 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="pull" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269473 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="38dd0c5f-6afb-4730-8900-e3e8b33f282a" containerName="extract" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.269832 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.271603 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.271648 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.271669 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-m2zc9" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.281152 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z29kl"] Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.423094 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z45f8\" (UniqueName: \"kubernetes.io/projected/9342ab94-785a-427b-84d2-5ac6ff709531-kube-api-access-z45f8\") pod \"nmstate-operator-646758c888-z29kl\" (UID: \"9342ab94-785a-427b-84d2-5ac6ff709531\") " pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.524443 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z45f8\" (UniqueName: \"kubernetes.io/projected/9342ab94-785a-427b-84d2-5ac6ff709531-kube-api-access-z45f8\") pod \"nmstate-operator-646758c888-z29kl\" (UID: \"9342ab94-785a-427b-84d2-5ac6ff709531\") " pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.565297 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z45f8\" (UniqueName: \"kubernetes.io/projected/9342ab94-785a-427b-84d2-5ac6ff709531-kube-api-access-z45f8\") pod \"nmstate-operator-646758c888-z29kl\" (UID: \"9342ab94-785a-427b-84d2-5ac6ff709531\") " pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.628206 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" Jan 22 13:55:35 crc kubenswrapper[4769]: I0122 13:55:35.806795 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-z29kl"] Jan 22 13:55:35 crc kubenswrapper[4769]: W0122 13:55:35.816018 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9342ab94_785a_427b_84d2_5ac6ff709531.slice/crio-069e806f288865a832dfd3b907c515fe4649083305998ee1d7f3d3940c2dd38c WatchSource:0}: Error finding container 069e806f288865a832dfd3b907c515fe4649083305998ee1d7f3d3940c2dd38c: Status 404 returned error can't find the container with id 069e806f288865a832dfd3b907c515fe4649083305998ee1d7f3d3940c2dd38c Jan 22 13:55:36 crc kubenswrapper[4769]: I0122 13:55:36.095426 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" event={"ID":"9342ab94-785a-427b-84d2-5ac6ff709531","Type":"ContainerStarted","Data":"069e806f288865a832dfd3b907c515fe4649083305998ee1d7f3d3940c2dd38c"} Jan 22 13:55:39 crc kubenswrapper[4769]: I0122 13:55:39.123164 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" event={"ID":"9342ab94-785a-427b-84d2-5ac6ff709531","Type":"ContainerStarted","Data":"293101b908d042393078034a7a5dcb7e5c47787f3f4afe360f5727515724f08b"} Jan 22 13:55:39 crc kubenswrapper[4769]: I0122 13:55:39.145762 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-z29kl" podStartSLOduration=1.833228905 podStartE2EDuration="4.14574572s" podCreationTimestamp="2026-01-22 13:55:35 +0000 UTC" firstStartedPulling="2026-01-22 13:55:35.81750133 +0000 UTC m=+715.228611259" lastFinishedPulling="2026-01-22 13:55:38.130018145 +0000 UTC m=+717.541128074" observedRunningTime="2026-01-22 13:55:39.142651203 +0000 UTC m=+718.553761132" watchObservedRunningTime="2026-01-22 13:55:39.14574572 +0000 UTC m=+718.556855649" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.138942 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xsnfh"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.140101 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.142240 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c7r96" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.150636 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.151347 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.156138 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xsnfh"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.160153 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.165667 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.178759 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-v6r9x"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.179531 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.263235 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.264022 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.273090 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.273239 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zt9n2" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.273450 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.282122 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301542 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-dbus-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301591 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-nmstate-lock\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301624 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsr4x\" (UniqueName: \"kubernetes.io/projected/880459e4-297b-408b-8205-c2197bf19c18-kube-api-access-qsr4x\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301851 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxbvk\" (UniqueName: 
\"kubernetes.io/projected/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-kube-api-access-jxbvk\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301882 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301910 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sdht\" (UniqueName: \"kubernetes.io/projected/fd9c945e-a392-4a96-8a06-893a09e8dc19-kube-api-access-2sdht\") pod \"nmstate-metrics-54757c584b-xsnfh\" (UID: \"fd9c945e-a392-4a96-8a06-893a09e8dc19\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.301960 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-ovs-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402703 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-ovs-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402763 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-dbus-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402808 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1eaf1c-9da8-4372-888f-ed8464d4313d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402834 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-nmstate-lock\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402852 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1eaf1c-9da8-4372-888f-ed8464d4313d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402832 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-ovs-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402877 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsr4x\" (UniqueName: \"kubernetes.io/projected/880459e4-297b-408b-8205-c2197bf19c18-kube-api-access-qsr4x\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.402880 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-nmstate-lock\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403052 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6jx\" (UniqueName: \"kubernetes.io/projected/bd1eaf1c-9da8-4372-888f-ed8464d4313d-kube-api-access-lt6jx\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403074 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-dbus-socket\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403114 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxbvk\" (UniqueName: \"kubernetes.io/projected/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-kube-api-access-jxbvk\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403153 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.403192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sdht\" (UniqueName: \"kubernetes.io/projected/fd9c945e-a392-4a96-8a06-893a09e8dc19-kube-api-access-2sdht\") pod \"nmstate-metrics-54757c584b-xsnfh\" (UID: \"fd9c945e-a392-4a96-8a06-893a09e8dc19\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: E0122 13:55:40.403343 4769 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 22 13:55:40 crc kubenswrapper[4769]: E0122 13:55:40.403392 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair 
podName:880459e4-297b-408b-8205-c2197bf19c18 nodeName:}" failed. No retries permitted until 2026-01-22 13:55:40.903374823 +0000 UTC m=+720.314484752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-64j27" (UID: "880459e4-297b-408b-8205-c2197bf19c18") : secret "openshift-nmstate-webhook" not found Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.422575 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sdht\" (UniqueName: \"kubernetes.io/projected/fd9c945e-a392-4a96-8a06-893a09e8dc19-kube-api-access-2sdht\") pod \"nmstate-metrics-54757c584b-xsnfh\" (UID: \"fd9c945e-a392-4a96-8a06-893a09e8dc19\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.422690 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsr4x\" (UniqueName: \"kubernetes.io/projected/880459e4-297b-408b-8205-c2197bf19c18-kube-api-access-qsr4x\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.434900 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxbvk\" (UniqueName: \"kubernetes.io/projected/7e7ab7e8-7c34-4b26-9c19-33ae90a756ec-kube-api-access-jxbvk\") pod \"nmstate-handler-v6r9x\" (UID: \"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec\") " pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.455970 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.457925 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d5d467dd8-9dd6w"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.458531 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.478480 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d5d467dd8-9dd6w"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.482080 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.482138 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.494075 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.504848 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt6jx\" (UniqueName: \"kubernetes.io/projected/bd1eaf1c-9da8-4372-888f-ed8464d4313d-kube-api-access-lt6jx\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.504944 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1eaf1c-9da8-4372-888f-ed8464d4313d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.504971 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1eaf1c-9da8-4372-888f-ed8464d4313d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.506184 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1eaf1c-9da8-4372-888f-ed8464d4313d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.508777 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1eaf1c-9da8-4372-888f-ed8464d4313d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.526624 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt6jx\" (UniqueName: \"kubernetes.io/projected/bd1eaf1c-9da8-4372-888f-ed8464d4313d-kube-api-access-lt6jx\") pod \"nmstate-console-plugin-7754f76f8b-t9pnx\" (UID: \"bd1eaf1c-9da8-4372-888f-ed8464d4313d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.585270 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606193 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-trusted-ca-bundle\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606242 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606272 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqh5h\" (UniqueName: \"kubernetes.io/projected/35f692b2-7216-401d-8a55-279589beda2a-kube-api-access-dqh5h\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606291 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-service-ca\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606325 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-console-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606363 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-oauth-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.606632 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-oauth-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.677251 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-xsnfh"] Jan 22 13:55:40 crc kubenswrapper[4769]: W0122 13:55:40.685728 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd9c945e_a392_4a96_8a06_893a09e8dc19.slice/crio-4617d9a2743eeda43b3412fca9684921845011c28587163abeb03ce5f4ed7b03 WatchSource:0}: Error finding container 
4617d9a2743eeda43b3412fca9684921845011c28587163abeb03ce5f4ed7b03: Status 404 returned error can't find the container with id 4617d9a2743eeda43b3412fca9684921845011c28587163abeb03ce5f4ed7b03 Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707384 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-console-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707441 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-oauth-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707487 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-oauth-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707505 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-trusted-ca-bundle\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707532 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707567 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqh5h\" (UniqueName: \"kubernetes.io/projected/35f692b2-7216-401d-8a55-279589beda2a-kube-api-access-dqh5h\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.707588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-service-ca\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.708448 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-console-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.708612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-oauth-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.708630 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-service-ca\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.708780 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35f692b2-7216-401d-8a55-279589beda2a-trusted-ca-bundle\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.727998 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqh5h\" (UniqueName: \"kubernetes.io/projected/35f692b2-7216-401d-8a55-279589beda2a-kube-api-access-dqh5h\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.728854 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-serving-cert\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.729212 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/35f692b2-7216-401d-8a55-279589beda2a-console-oauth-config\") pod \"console-5d5d467dd8-9dd6w\" (UID: \"35f692b2-7216-401d-8a55-279589beda2a\") " pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.812028 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.834644 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx"] Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.913390 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:40 crc kubenswrapper[4769]: I0122 13:55:40.918864 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/880459e4-297b-408b-8205-c2197bf19c18-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-64j27\" (UID: \"880459e4-297b-408b-8205-c2197bf19c18\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.067358 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.121657 4769 scope.go:117] "RemoveContainer" containerID="3104553fb5aa42e836333e0998d4bb894a479a4adf589398bbdf1b42722c06a3" Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.140974 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v6r9x" event={"ID":"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec","Type":"ContainerStarted","Data":"e66f3bc9ebb33eaeb4a530134b347879f3218e5ed4f23520253f0c694fc8a18f"} Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.142814 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" event={"ID":"bd1eaf1c-9da8-4372-888f-ed8464d4313d","Type":"ContainerStarted","Data":"a41050fb6fbd73d616919cb58ec9e77609770a44987fa43da1987488b161daa4"} Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.144080 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" event={"ID":"fd9c945e-a392-4a96-8a06-893a09e8dc19","Type":"ContainerStarted","Data":"4617d9a2743eeda43b3412fca9684921845011c28587163abeb03ce5f4ed7b03"} Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.211925 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d5d467dd8-9dd6w"] Jan 22 13:55:41 crc kubenswrapper[4769]: W0122 13:55:41.228684 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35f692b2_7216_401d_8a55_279589beda2a.slice/crio-451b555a59aef0d9dbb265ad743ca2cc8ce908690287247b97f8ab7a1a0f031a WatchSource:0}: Error finding container 451b555a59aef0d9dbb265ad743ca2cc8ce908690287247b97f8ab7a1a0f031a: Status 404 returned error can't find the container with id 451b555a59aef0d9dbb265ad743ca2cc8ce908690287247b97f8ab7a1a0f031a Jan 22 13:55:41 crc kubenswrapper[4769]: I0122 13:55:41.246832 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27"] Jan 22 13:55:41 crc kubenswrapper[4769]: W0122 13:55:41.263656 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod880459e4_297b_408b_8205_c2197bf19c18.slice/crio-272baa9c94b6e441c0a3c451c542467b4ca7ba033904be8778ca6b4a22c995dc WatchSource:0}: Error finding container 272baa9c94b6e441c0a3c451c542467b4ca7ba033904be8778ca6b4a22c995dc: Status 404 returned error can't find the container with id 272baa9c94b6e441c0a3c451c542467b4ca7ba033904be8778ca6b4a22c995dc Jan 22 13:55:42 crc kubenswrapper[4769]: I0122 13:55:42.157554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" event={"ID":"880459e4-297b-408b-8205-c2197bf19c18","Type":"ContainerStarted","Data":"272baa9c94b6e441c0a3c451c542467b4ca7ba033904be8778ca6b4a22c995dc"} Jan 22 13:55:42 crc kubenswrapper[4769]: I0122 13:55:42.159433 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d5d467dd8-9dd6w" event={"ID":"35f692b2-7216-401d-8a55-279589beda2a","Type":"ContainerStarted","Data":"8c215d89f033807952cc94109893a2deb3a0c11b0ecc1c5495156e88cf3fa24f"} Jan 22 13:55:42 crc kubenswrapper[4769]: I0122 13:55:42.159466 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d5d467dd8-9dd6w" 
event={"ID":"35f692b2-7216-401d-8a55-279589beda2a","Type":"ContainerStarted","Data":"451b555a59aef0d9dbb265ad743ca2cc8ce908690287247b97f8ab7a1a0f031a"} Jan 22 13:55:42 crc kubenswrapper[4769]: I0122 13:55:42.176026 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d5d467dd8-9dd6w" podStartSLOduration=2.176007981 podStartE2EDuration="2.176007981s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:55:42.175792565 +0000 UTC m=+721.586902494" watchObservedRunningTime="2026-01-22 13:55:42.176007981 +0000 UTC m=+721.587117910" Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.171500 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" event={"ID":"fd9c945e-a392-4a96-8a06-893a09e8dc19","Type":"ContainerStarted","Data":"d371cdda7780e170e717d3cb54842c56594eeda37df861323015ed2a09b1034d"} Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.172692 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-v6r9x" event={"ID":"7e7ab7e8-7c34-4b26-9c19-33ae90a756ec","Type":"ContainerStarted","Data":"cb8f3370fccddcdc502824964de63c781194da673b8dd45aec53cd4d40cd32dc"} Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.173254 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.176238 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" event={"ID":"bd1eaf1c-9da8-4372-888f-ed8464d4313d","Type":"ContainerStarted","Data":"c1203e1644af8987291fba5f98e354394eaac11495d7d41015064ec135de716a"} Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.177978 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" event={"ID":"880459e4-297b-408b-8205-c2197bf19c18","Type":"ContainerStarted","Data":"e8d1c13a397ffb3088eb9b597d7615a904b01631c7149224133c8bf341a4e101"} Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.178328 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.206678 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-v6r9x" podStartSLOduration=1.230778186 podStartE2EDuration="4.206659588s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="2026-01-22 13:55:40.534577018 +0000 UTC m=+719.945686947" lastFinishedPulling="2026-01-22 13:55:43.5104584 +0000 UTC m=+722.921568349" observedRunningTime="2026-01-22 13:55:44.189787717 +0000 UTC m=+723.600897646" watchObservedRunningTime="2026-01-22 13:55:44.206659588 +0000 UTC m=+723.617769517" Jan 22 13:55:44 crc kubenswrapper[4769]: I0122 13:55:44.208180 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-t9pnx" podStartSLOduration=1.5302433519999998 podStartE2EDuration="4.208169407s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="2026-01-22 13:55:40.842125315 +0000 UTC m=+720.253235244" lastFinishedPulling="2026-01-22 13:55:43.52005136 +0000 UTC m=+722.931161299" observedRunningTime="2026-01-22 13:55:44.203409178 
+0000 UTC m=+723.614519117" watchObservedRunningTime="2026-01-22 13:55:44.208169407 +0000 UTC m=+723.619279336" Jan 22 13:55:46 crc kubenswrapper[4769]: I0122 13:55:46.189726 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" event={"ID":"fd9c945e-a392-4a96-8a06-893a09e8dc19","Type":"ContainerStarted","Data":"de117665bf3196559046ca3868db77b1810705d365e0e73def649688a051f52e"} Jan 22 13:55:46 crc kubenswrapper[4769]: I0122 13:55:46.206607 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-xsnfh" podStartSLOduration=1.303999334 podStartE2EDuration="6.20658188s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="2026-01-22 13:55:40.687602668 +0000 UTC m=+720.098712597" lastFinishedPulling="2026-01-22 13:55:45.590185214 +0000 UTC m=+725.001295143" observedRunningTime="2026-01-22 13:55:46.203405801 +0000 UTC m=+725.614515740" watchObservedRunningTime="2026-01-22 13:55:46.20658188 +0000 UTC m=+725.617691809" Jan 22 13:55:46 crc kubenswrapper[4769]: I0122 13:55:46.207217 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" podStartSLOduration=3.947649724 podStartE2EDuration="6.207211056s" podCreationTimestamp="2026-01-22 13:55:40 +0000 UTC" firstStartedPulling="2026-01-22 13:55:41.265970665 +0000 UTC m=+720.677080594" lastFinishedPulling="2026-01-22 13:55:43.525531957 +0000 UTC m=+722.936641926" observedRunningTime="2026-01-22 13:55:44.248568245 +0000 UTC m=+723.659678184" watchObservedRunningTime="2026-01-22 13:55:46.207211056 +0000 UTC m=+725.618320995" Jan 22 13:55:50 crc kubenswrapper[4769]: I0122 13:55:50.520019 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-v6r9x" Jan 22 13:55:50 crc kubenswrapper[4769]: I0122 13:55:50.813568 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:50 crc kubenswrapper[4769]: I0122 13:55:50.814429 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:50 crc kubenswrapper[4769]: I0122 13:55:50.821334 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:51 crc kubenswrapper[4769]: I0122 13:55:51.235239 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d5d467dd8-9dd6w" Jan 22 13:55:51 crc kubenswrapper[4769]: I0122 13:55:51.319597 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:56:01 crc kubenswrapper[4769]: I0122 13:56:01.073932 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-64j27" Jan 22 13:56:10 crc kubenswrapper[4769]: I0122 13:56:10.482406 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:56:10 crc kubenswrapper[4769]: I0122 13:56:10.482909 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" 
podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:56:11 crc kubenswrapper[4769]: I0122 13:56:11.580523 4769 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.345313 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v"] Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.348201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.354079 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.365859 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v"] Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.370484 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-nwrtw" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" containerID="cri-o://b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" gracePeriod=15 Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.414273 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.414331 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.414355 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.515244 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.515300 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.515331 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.516375 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.516559 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.540445 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.669885 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.741678 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nwrtw_9fa4c168-21ea-4f79-a600-7f3c8f656bd0/console/0.log" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.742032 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.863240 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v"] Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921156 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921197 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921242 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921258 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921479 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921552 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.921635 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") pod \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\" (UID: \"9fa4c168-21ea-4f79-a600-7f3c8f656bd0\") " Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.922260 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.922397 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca" (OuterVolumeSpecName: "service-ca") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.922562 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.922585 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config" (OuterVolumeSpecName: "console-config") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.926711 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.927148 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 13:56:16 crc kubenswrapper[4769]: I0122 13:56:16.927160 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc" (OuterVolumeSpecName: "kube-api-access-wt8zc") pod "9fa4c168-21ea-4f79-a600-7f3c8f656bd0" (UID: "9fa4c168-21ea-4f79-a600-7f3c8f656bd0"). InnerVolumeSpecName "kube-api-access-wt8zc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023588 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023653 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023663 4769 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023672 4769 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023681 4769 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023689 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt8zc\" (UniqueName: \"kubernetes.io/projected/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-kube-api-access-wt8zc\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.023700 4769 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9fa4c168-21ea-4f79-a600-7f3c8f656bd0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.387304 4769 generic.go:334] "Generic (PLEG): container finished" podID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerID="890fbd70b9990cdab67db237f376067a636c58e36804cbc5514e8c0f16624b00" exitCode=0 Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.387395 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerDied","Data":"890fbd70b9990cdab67db237f376067a636c58e36804cbc5514e8c0f16624b00"} Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.387670 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerStarted","Data":"5bd0bdffd5fe41dd37b42854f8cba8b2ef713aff82ddb5f084f8b150d8aaec8f"} Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389284 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-nwrtw_9fa4c168-21ea-4f79-a600-7f3c8f656bd0/console/0.log" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389356 4769 generic.go:334] "Generic (PLEG): container finished" podID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerID="b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" exitCode=2 Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389385 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-nwrtw" event={"ID":"9fa4c168-21ea-4f79-a600-7f3c8f656bd0","Type":"ContainerDied","Data":"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089"} Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389410 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-nwrtw" event={"ID":"9fa4c168-21ea-4f79-a600-7f3c8f656bd0","Type":"ContainerDied","Data":"261bd1091a2577bc464771e7c33703e0f325865e92a22082bfb502ff9ac9d6f2"} Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389431 4769 scope.go:117] "RemoveContainer" containerID="b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.389506 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-nwrtw" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.409911 4769 scope.go:117] "RemoveContainer" containerID="b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" Jan 22 13:56:17 crc kubenswrapper[4769]: E0122 13:56:17.410318 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089\": container with ID starting with b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089 not found: ID does not exist" containerID="b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.410358 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089"} err="failed to get container status \"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089\": rpc error: code = NotFound desc = could not find container \"b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089\": container with ID starting with b84cebc5b675e12661d4f7b983dcf05ea20ef3d051e2af2e9f65b08adbb73089 not found: ID does not exist" Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.422259 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:56:17 crc kubenswrapper[4769]: I0122 13:56:17.427167 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-nwrtw"] Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.699287 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:18 crc kubenswrapper[4769]: E0122 13:56:18.699945 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.699960 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.700082 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" containerName="console" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.700786 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.718247 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.845023 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.845262 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.845311 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.891541 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa4c168-21ea-4f79-a600-7f3c8f656bd0" path="/var/lib/kubelet/pods/9fa4c168-21ea-4f79-a600-7f3c8f656bd0/volumes" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946474 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946619 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946924 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.946968 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") pod \"redhat-operators-bpmf9\" 
(UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:18 crc kubenswrapper[4769]: I0122 13:56:18.968389 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") pod \"redhat-operators-bpmf9\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:19 crc kubenswrapper[4769]: I0122 13:56:19.078767 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:19 crc kubenswrapper[4769]: I0122 13:56:19.403087 4769 generic.go:334] "Generic (PLEG): container finished" podID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerID="c152540a14357f8370a3662a02aafa5e7b26afe456e69ee3ad50ed2522eaf692" exitCode=0 Jan 22 13:56:19 crc kubenswrapper[4769]: I0122 13:56:19.403248 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerDied","Data":"c152540a14357f8370a3662a02aafa5e7b26afe456e69ee3ad50ed2522eaf692"} Jan 22 13:56:19 crc kubenswrapper[4769]: I0122 13:56:19.489082 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.411540 4769 generic.go:334] "Generic (PLEG): container finished" podID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerID="730dcc795c905f62c1eed3b68862040bcbc4f79cce5d34ad5a4d9d2018a6070a" exitCode=0 Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.411718 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerDied","Data":"730dcc795c905f62c1eed3b68862040bcbc4f79cce5d34ad5a4d9d2018a6070a"} Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.411990 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerStarted","Data":"106665bdbcb8a203e18701468576d0c52caf4507eea3613063f75100024b19fe"} Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.415755 4769 generic.go:334] "Generic (PLEG): container finished" podID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerID="df17619ffdf330b353464dece8965283d3ec4b8b77a08731fe7f06a1c92f3802" exitCode=0 Jan 22 13:56:20 crc kubenswrapper[4769]: I0122 13:56:20.415831 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerDied","Data":"df17619ffdf330b353464dece8965283d3ec4b8b77a08731fe7f06a1c92f3802"} Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.422592 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerStarted","Data":"395addc925f05ab2ea9342b7a8df05feea4229a5f9a22966d52b9cc3a729037f"} Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.626192 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.781454 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") pod \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.781591 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") pod \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.781646 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") pod \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\" (UID: \"2bd12d13-4630-4e58-95dd-7e6b2bb89428\") " Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.782653 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle" (OuterVolumeSpecName: "bundle") pod "2bd12d13-4630-4e58-95dd-7e6b2bb89428" (UID: "2bd12d13-4630-4e58-95dd-7e6b2bb89428"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.786673 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb" (OuterVolumeSpecName: "kube-api-access-69wvb") pod "2bd12d13-4630-4e58-95dd-7e6b2bb89428" (UID: "2bd12d13-4630-4e58-95dd-7e6b2bb89428"). InnerVolumeSpecName "kube-api-access-69wvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.797143 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util" (OuterVolumeSpecName: "util") pod "2bd12d13-4630-4e58-95dd-7e6b2bb89428" (UID: "2bd12d13-4630-4e58-95dd-7e6b2bb89428"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.883388 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-util\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.883712 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2bd12d13-4630-4e58-95dd-7e6b2bb89428-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:21 crc kubenswrapper[4769]: I0122 13:56:21.883857 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69wvb\" (UniqueName: \"kubernetes.io/projected/2bd12d13-4630-4e58-95dd-7e6b2bb89428-kube-api-access-69wvb\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.429592 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.429581 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v" event={"ID":"2bd12d13-4630-4e58-95dd-7e6b2bb89428","Type":"ContainerDied","Data":"5bd0bdffd5fe41dd37b42854f8cba8b2ef713aff82ddb5f084f8b150d8aaec8f"} Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.430515 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd0bdffd5fe41dd37b42854f8cba8b2ef713aff82ddb5f084f8b150d8aaec8f" Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.431135 4769 generic.go:334] "Generic (PLEG): container finished" podID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerID="395addc925f05ab2ea9342b7a8df05feea4229a5f9a22966d52b9cc3a729037f" exitCode=0 Jan 22 13:56:22 crc kubenswrapper[4769]: I0122 13:56:22.431168 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerDied","Data":"395addc925f05ab2ea9342b7a8df05feea4229a5f9a22966d52b9cc3a729037f"} Jan 22 13:56:23 crc kubenswrapper[4769]: I0122 13:56:23.439225 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerStarted","Data":"32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375"} Jan 22 13:56:23 crc kubenswrapper[4769]: I0122 13:56:23.456098 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bpmf9" podStartSLOduration=2.764578658 podStartE2EDuration="5.456078784s" podCreationTimestamp="2026-01-22 13:56:18 +0000 UTC" firstStartedPulling="2026-01-22 13:56:20.41613883 +0000 UTC m=+759.827248759" lastFinishedPulling="2026-01-22 13:56:23.107638956 +0000 UTC m=+762.518748885" observedRunningTime="2026-01-22 13:56:23.455559931 +0000 UTC m=+762.866669870" watchObservedRunningTime="2026-01-22 13:56:23.456078784 +0000 UTC m=+762.867188713" Jan 22 13:56:29 crc kubenswrapper[4769]: I0122 13:56:29.080031 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:29 crc kubenswrapper[4769]: I0122 13:56:29.081948 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:29 crc kubenswrapper[4769]: I0122 13:56:29.139105 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:29 crc kubenswrapper[4769]: I0122 13:56:29.508182 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:30 crc kubenswrapper[4769]: I0122 13:56:30.487526 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:31 crc kubenswrapper[4769]: I0122 13:56:31.480166 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bpmf9" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="registry-server" containerID="cri-o://32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375" gracePeriod=2 Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 
13:56:32.487661 4769 generic.go:334] "Generic (PLEG): container finished" podID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerID="32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375" exitCode=0 Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.487711 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerDied","Data":"32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375"} Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.856707 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871008 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4"] Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871270 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="registry-server" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871293 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="registry-server" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871311 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="extract-utilities" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871320 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="extract-utilities" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871332 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="pull" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871340 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="pull" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871357 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="extract" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871365 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="extract" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871377 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="util" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871384 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="util" Jan 22 13:56:32 crc kubenswrapper[4769]: E0122 13:56:32.871393 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="extract-content" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871401 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="extract-content" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871522 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" containerName="registry-server" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.871536 4769 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2bd12d13-4630-4e58-95dd-7e6b2bb89428" containerName="extract" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.872025 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.875586 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.879065 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.879192 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.879270 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.879331 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6rdbl" Jan 22 13:56:32 crc kubenswrapper[4769]: I0122 13:56:32.896745 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.025764 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") pod \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.025909 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") pod \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.025937 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") pod \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\" (UID: \"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35\") " Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.026090 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-webhook-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.026122 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-apiservice-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.026147 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-btdrz\" (UniqueName: \"kubernetes.io/projected/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-kube-api-access-btdrz\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.026948 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities" (OuterVolumeSpecName: "utilities") pod "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" (UID: "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.034220 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r" (OuterVolumeSpecName: "kube-api-access-zml5r") pod "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" (UID: "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35"). InnerVolumeSpecName "kube-api-access-zml5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.126850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-webhook-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.126921 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-apiservice-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.126959 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btdrz\" (UniqueName: \"kubernetes.io/projected/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-kube-api-access-btdrz\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.127054 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.127072 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zml5r\" (UniqueName: \"kubernetes.io/projected/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-kube-api-access-zml5r\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.132826 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-apiservice-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 
13:56:33.133333 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-webhook-cert\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.141352 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.142110 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.144728 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-h6ftx" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.144934 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.145072 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.145577 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.148381 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btdrz\" (UniqueName: \"kubernetes.io/projected/0e40742e-231f-4f7b-aa4b-fb58332c3dbe-kube-api-access-btdrz\") pod \"metallb-operator-controller-manager-ddb77dbc9-z2nv4\" (UID: \"0e40742e-231f-4f7b-aa4b-fb58332c3dbe\") " pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.196106 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.227994 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjf4b\" (UniqueName: \"kubernetes.io/projected/5ee84f81-0260-4579-b602-c37bcf5cc7aa-kube-api-access-tjf4b\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.228062 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.228089 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-webhook-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.329550 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjf4b\" (UniqueName: \"kubernetes.io/projected/5ee84f81-0260-4579-b602-c37bcf5cc7aa-kube-api-access-tjf4b\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.329889 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.329917 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-webhook-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.334045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-webhook-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.353940 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ee84f81-0260-4579-b602-c37bcf5cc7aa-apiservice-cert\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " 
pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.354005 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjf4b\" (UniqueName: \"kubernetes.io/projected/5ee84f81-0260-4579-b602-c37bcf5cc7aa-kube-api-access-tjf4b\") pod \"metallb-operator-webhook-server-7b46c7846-xbsl9\" (UID: \"5ee84f81-0260-4579-b602-c37bcf5cc7aa\") " pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.374319 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" (UID: "d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.435875 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.442619 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4"] Jan 22 13:56:33 crc kubenswrapper[4769]: W0122 13:56:33.447063 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e40742e_231f_4f7b_aa4b_fb58332c3dbe.slice/crio-3743dffb26594210bf4fd3edd69e1f1a060bfaa64696844553e6fd025b9ca9f6 WatchSource:0}: Error finding container 3743dffb26594210bf4fd3edd69e1f1a060bfaa64696844553e6fd025b9ca9f6: Status 404 returned error can't find the container with id 3743dffb26594210bf4fd3edd69e1f1a060bfaa64696844553e6fd025b9ca9f6 Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.482561 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.495176 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bpmf9" event={"ID":"d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35","Type":"ContainerDied","Data":"106665bdbcb8a203e18701468576d0c52caf4507eea3613063f75100024b19fe"} Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.495211 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bpmf9" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.495232 4769 scope.go:117] "RemoveContainer" containerID="32bcc0e004ca426455f2af36390c38c54188e7f45cbf190324c9729aec8c6375" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.497452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" event={"ID":"0e40742e-231f-4f7b-aa4b-fb58332c3dbe","Type":"ContainerStarted","Data":"3743dffb26594210bf4fd3edd69e1f1a060bfaa64696844553e6fd025b9ca9f6"} Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.516043 4769 scope.go:117] "RemoveContainer" containerID="395addc925f05ab2ea9342b7a8df05feea4229a5f9a22966d52b9cc3a729037f" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.524567 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.536932 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bpmf9"] Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.549105 4769 scope.go:117] "RemoveContainer" containerID="730dcc795c905f62c1eed3b68862040bcbc4f79cce5d34ad5a4d9d2018a6070a" Jan 22 13:56:33 crc kubenswrapper[4769]: I0122 13:56:33.702055 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9"] Jan 22 13:56:33 crc kubenswrapper[4769]: W0122 13:56:33.709991 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ee84f81_0260_4579_b602_c37bcf5cc7aa.slice/crio-3be07dbb093b3632190d0a57b6e62536c782011567550c10d0aaf9eb2457c586 WatchSource:0}: Error finding container 3be07dbb093b3632190d0a57b6e62536c782011567550c10d0aaf9eb2457c586: Status 404 returned error can't find the container with id 3be07dbb093b3632190d0a57b6e62536c782011567550c10d0aaf9eb2457c586 Jan 22 13:56:34 crc kubenswrapper[4769]: I0122 13:56:34.505814 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" event={"ID":"5ee84f81-0260-4579-b602-c37bcf5cc7aa","Type":"ContainerStarted","Data":"3be07dbb093b3632190d0a57b6e62536c782011567550c10d0aaf9eb2457c586"} Jan 22 13:56:34 crc kubenswrapper[4769]: I0122 13:56:34.891708 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35" path="/var/lib/kubelet/pods/d8c3e1bd-9a95-4d12-8888-7e42cb8f9b35/volumes" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.481967 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.482530 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.482572 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.483094 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.483149 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17" gracePeriod=600 Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.548602 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" event={"ID":"5ee84f81-0260-4579-b602-c37bcf5cc7aa","Type":"ContainerStarted","Data":"b142d9bc95b974a43acae0c663421bd459fe25de709c8e19a53858942214acd1"} Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.548984 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.551066 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" event={"ID":"0e40742e-231f-4f7b-aa4b-fb58332c3dbe","Type":"ContainerStarted","Data":"d7305f1836274804aee27874d01e68e216e546ee58c63359bf5ad545fb93fa4b"} Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.551204 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.571332 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" podStartSLOduration=1.342811561 podStartE2EDuration="7.571315284s" podCreationTimestamp="2026-01-22 13:56:33 +0000 UTC" firstStartedPulling="2026-01-22 13:56:33.712500108 +0000 UTC m=+773.123610037" lastFinishedPulling="2026-01-22 13:56:39.941003831 +0000 UTC m=+779.352113760" observedRunningTime="2026-01-22 13:56:40.567244363 +0000 UTC m=+779.978354292" watchObservedRunningTime="2026-01-22 13:56:40.571315284 +0000 UTC m=+779.982425213" Jan 22 13:56:40 crc kubenswrapper[4769]: I0122 13:56:40.590551 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" podStartSLOduration=2.123635992 podStartE2EDuration="8.590531724s" podCreationTimestamp="2026-01-22 13:56:32 +0000 UTC" firstStartedPulling="2026-01-22 13:56:33.449878474 +0000 UTC m=+772.860988403" lastFinishedPulling="2026-01-22 13:56:39.916774206 +0000 UTC m=+779.327884135" observedRunningTime="2026-01-22 13:56:40.587987411 +0000 UTC m=+779.999097340" watchObservedRunningTime="2026-01-22 13:56:40.590531724 +0000 UTC m=+780.001641653" Jan 22 13:56:41 crc kubenswrapper[4769]: I0122 13:56:41.565469 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17" exitCode=0 Jan 22 13:56:41 crc 
kubenswrapper[4769]: I0122 13:56:41.565936 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17"} Jan 22 13:56:41 crc kubenswrapper[4769]: I0122 13:56:41.565986 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e"} Jan 22 13:56:41 crc kubenswrapper[4769]: I0122 13:56:41.566004 4769 scope.go:117] "RemoveContainer" containerID="7014a00da4fb8832772c2abca967236faf9013893d9fcbf3a4a715925f75ad7d" Jan 22 13:56:53 crc kubenswrapper[4769]: I0122 13:56:53.486427 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7b46c7846-xbsl9" Jan 22 13:57:13 crc kubenswrapper[4769]: I0122 13:57:13.198814 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-ddb77dbc9-z2nv4" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.051561 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5vm9t"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.053650 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.056325 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.056382 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-krt5h" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.056442 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.060661 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.061339 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.064256 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068831 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-sockets\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068884 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-startup\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068908 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics-certs\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068937 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbs6n\" (UniqueName: \"kubernetes.io/projected/877a13a0-eef8-4409-b421-e3a8c23abc8a-kube-api-access-kbs6n\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068975 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgp9b\" (UniqueName: \"kubernetes.io/projected/82c00d20-0e87-4f34-9cae-d454867c62a0-kube-api-access-wgp9b\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.068996 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.069178 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-reloader\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.069247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-conf\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.069326 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.076337 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.123292 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lwzgw"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.124389 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.127815 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7ccsc" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.128065 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.128209 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.128312 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.139178 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-qkpds"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.139988 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.142706 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.170853 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.170928 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-reloader\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.170965 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-cert\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.170998 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-conf\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171048 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4762d945-0720-43a9-8af2-0317ce89dda2-metallb-excludel2\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171080 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171114 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-metrics-certs\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171148 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lstr4\" (UniqueName: \"kubernetes.io/projected/4762d945-0720-43a9-8af2-0317ce89dda2-kube-api-access-lstr4\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-sockets\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171224 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cgt\" (UniqueName: \"kubernetes.io/projected/8fbbec23-1005-4364-bf82-8a646a24801a-kube-api-access-42cgt\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171257 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-startup\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171285 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics-certs\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171322 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbs6n\" (UniqueName: \"kubernetes.io/projected/877a13a0-eef8-4409-b421-e3a8c23abc8a-kube-api-access-kbs6n\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171350 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171394 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgp9b\" (UniqueName: \"kubernetes.io/projected/82c00d20-0e87-4f34-9cae-d454867c62a0-kube-api-access-wgp9b\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171417 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.171905 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.172200 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-sockets\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.172308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-reloader\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.172958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-conf\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.173088 4769 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.173154 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert podName:82c00d20-0e87-4f34-9cae-d454867c62a0 nodeName:}" failed. No retries permitted until 2026-01-22 13:57:14.673135229 +0000 UTC m=+814.084245158 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert") pod "frr-k8s-webhook-server-7df86c4f6c-9n85j" (UID: "82c00d20-0e87-4f34-9cae-d454867c62a0") : secret "frr-k8s-webhook-server-cert" not found Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.173521 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/877a13a0-eef8-4409-b421-e3a8c23abc8a-frr-startup\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.187578 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/877a13a0-eef8-4409-b421-e3a8c23abc8a-metrics-certs\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.195922 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-qkpds"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.198413 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbs6n\" (UniqueName: \"kubernetes.io/projected/877a13a0-eef8-4409-b421-e3a8c23abc8a-kube-api-access-kbs6n\") pod \"frr-k8s-5vm9t\" (UID: \"877a13a0-eef8-4409-b421-e3a8c23abc8a\") " pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.205866 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgp9b\" (UniqueName: \"kubernetes.io/projected/82c00d20-0e87-4f34-9cae-d454867c62a0-kube-api-access-wgp9b\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272207 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4762d945-0720-43a9-8af2-0317ce89dda2-metallb-excludel2\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272295 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-metrics-certs\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lstr4\" (UniqueName: \"kubernetes.io/projected/4762d945-0720-43a9-8af2-0317ce89dda2-kube-api-access-lstr4\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272360 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42cgt\" (UniqueName: \"kubernetes.io/projected/8fbbec23-1005-4364-bf82-8a646a24801a-kube-api-access-42cgt\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272395 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.272481 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-cert\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.272669 4769 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.272707 4769 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.272726 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist podName:4762d945-0720-43a9-8af2-0317ce89dda2 nodeName:}" failed. No retries permitted until 2026-01-22 13:57:14.772703336 +0000 UTC m=+814.183813265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist") pod "speaker-lwzgw" (UID: "4762d945-0720-43a9-8af2-0317ce89dda2") : secret "metallb-memberlist" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.272784 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs podName:4762d945-0720-43a9-8af2-0317ce89dda2 nodeName:}" failed. No retries permitted until 2026-01-22 13:57:14.772767867 +0000 UTC m=+814.183877796 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs") pod "speaker-lwzgw" (UID: "4762d945-0720-43a9-8af2-0317ce89dda2") : secret "speaker-certs-secret" not found Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.273149 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4762d945-0720-43a9-8af2-0317ce89dda2-metallb-excludel2\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.276041 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.276126 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-metrics-certs\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.286329 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fbbec23-1005-4364-bf82-8a646a24801a-cert\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.291448 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42cgt\" (UniqueName: \"kubernetes.io/projected/8fbbec23-1005-4364-bf82-8a646a24801a-kube-api-access-42cgt\") pod \"controller-6968d8fdc4-qkpds\" (UID: \"8fbbec23-1005-4364-bf82-8a646a24801a\") " pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.294145 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lstr4\" (UniqueName: \"kubernetes.io/projected/4762d945-0720-43a9-8af2-0317ce89dda2-kube-api-access-lstr4\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.373508 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.454870 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.679102 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.689944 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82c00d20-0e87-4f34-9cae-d454867c62a0-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-9n85j\" (UID: \"82c00d20-0e87-4f34-9cae-d454867c62a0\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.705161 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-qkpds"] Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.767528 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"bf9baa78704a8825bcfad0bd10acbef54170e880d8b884e049f12093bc0c6993"} Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.768516 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qkpds" event={"ID":"8fbbec23-1005-4364-bf82-8a646a24801a","Type":"ContainerStarted","Data":"97d8cb24efa65ed90003b9c7a6d1f1cbfaa8b88a8d3a2c4bab2c9d1f27b64678"} Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.781012 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.781102 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.781208 4769 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 13:57:14 crc kubenswrapper[4769]: E0122 13:57:14.781249 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist podName:4762d945-0720-43a9-8af2-0317ce89dda2 nodeName:}" failed. No retries permitted until 2026-01-22 13:57:15.781236662 +0000 UTC m=+815.192346591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist") pod "speaker-lwzgw" (UID: "4762d945-0720-43a9-8af2-0317ce89dda2") : secret "metallb-memberlist" not found Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.786086 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-metrics-certs\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:14 crc kubenswrapper[4769]: I0122 13:57:14.984636 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.373907 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j"] Jan 22 13:57:15 crc kubenswrapper[4769]: W0122 13:57:15.378094 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82c00d20_0e87_4f34_9cae_d454867c62a0.slice/crio-68a4b265091dd26e16945e69583e645f94fc709a02a7cdc0a38ba933a5eb3d4d WatchSource:0}: Error finding container 68a4b265091dd26e16945e69583e645f94fc709a02a7cdc0a38ba933a5eb3d4d: Status 404 returned error can't find the container with id 68a4b265091dd26e16945e69583e645f94fc709a02a7cdc0a38ba933a5eb3d4d Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.776017 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" event={"ID":"82c00d20-0e87-4f34-9cae-d454867c62a0","Type":"ContainerStarted","Data":"68a4b265091dd26e16945e69583e645f94fc709a02a7cdc0a38ba933a5eb3d4d"} Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.777891 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qkpds" event={"ID":"8fbbec23-1005-4364-bf82-8a646a24801a","Type":"ContainerStarted","Data":"2228558587eb0d6c954924fb70ce7853356dc45e5f9c1cc75078a449fc944c51"} Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.777920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-qkpds" event={"ID":"8fbbec23-1005-4364-bf82-8a646a24801a","Type":"ContainerStarted","Data":"27d43a10273b2050f21a3ce7386c578bb2fe88fd0491281d4323a945ed721cd1"} Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.779047 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.798419 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.801518 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-qkpds" podStartSLOduration=1.801489282 podStartE2EDuration="1.801489282s" podCreationTimestamp="2026-01-22 13:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:57:15.796815364 +0000 UTC m=+815.207925293" watchObservedRunningTime="2026-01-22 
13:57:15.801489282 +0000 UTC m=+815.212599211" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.811992 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4762d945-0720-43a9-8af2-0317ce89dda2-memberlist\") pod \"speaker-lwzgw\" (UID: \"4762d945-0720-43a9-8af2-0317ce89dda2\") " pod="metallb-system/speaker-lwzgw" Jan 22 13:57:15 crc kubenswrapper[4769]: I0122 13:57:15.938636 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lwzgw" Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.786478 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lwzgw" event={"ID":"4762d945-0720-43a9-8af2-0317ce89dda2","Type":"ContainerStarted","Data":"8c40d999d365cfb42d78b2541bff6e59ca12406d42729de3e879460e139fe2a6"} Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.786545 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lwzgw" event={"ID":"4762d945-0720-43a9-8af2-0317ce89dda2","Type":"ContainerStarted","Data":"2c533603351da98406f4cf0e54ed1e8f6ac61300a2ca9063e969f80c7c28b07b"} Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.786560 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lwzgw" event={"ID":"4762d945-0720-43a9-8af2-0317ce89dda2","Type":"ContainerStarted","Data":"3f8efede264931bfc2e40600bf8f74adefcfbfc12437fff9b34ce8e0d56d11ee"} Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.786835 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lwzgw" Jan 22 13:57:16 crc kubenswrapper[4769]: I0122 13:57:16.809320 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lwzgw" podStartSLOduration=2.809299792 podStartE2EDuration="2.809299792s" podCreationTimestamp="2026-01-22 13:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:57:16.808366077 +0000 UTC m=+816.219476006" watchObservedRunningTime="2026-01-22 13:57:16.809299792 +0000 UTC m=+816.220409721" Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.829644 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" event={"ID":"82c00d20-0e87-4f34-9cae-d454867c62a0","Type":"ContainerStarted","Data":"8edd3e266a2c9fb36355123a5a006e538490ca45d355b9d3c70071dd251745cb"} Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.830512 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.831534 4769 generic.go:334] "Generic (PLEG): container finished" podID="877a13a0-eef8-4409-b421-e3a8c23abc8a" containerID="3ddf68a58f5c9fea873fd5bdb5df851b316a079b68820236d3b921cc42eeb630" exitCode=0 Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.831589 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerDied","Data":"3ddf68a58f5c9fea873fd5bdb5df851b316a079b68820236d3b921cc42eeb630"} Jan 22 13:57:22 crc kubenswrapper[4769]: I0122 13:57:22.849843 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" podStartSLOduration=2.205915818 podStartE2EDuration="8.849826103s" 
podCreationTimestamp="2026-01-22 13:57:14 +0000 UTC" firstStartedPulling="2026-01-22 13:57:15.382151959 +0000 UTC m=+814.793261888" lastFinishedPulling="2026-01-22 13:57:22.026062244 +0000 UTC m=+821.437172173" observedRunningTime="2026-01-22 13:57:22.845395941 +0000 UTC m=+822.256505880" watchObservedRunningTime="2026-01-22 13:57:22.849826103 +0000 UTC m=+822.260936032" Jan 22 13:57:23 crc kubenswrapper[4769]: I0122 13:57:23.838213 4769 generic.go:334] "Generic (PLEG): container finished" podID="877a13a0-eef8-4409-b421-e3a8c23abc8a" containerID="0058a5cb70907264a3bd840598d04dfd89eef9277b655c5bb5f7ffcc58fb8c08" exitCode=0 Jan 22 13:57:23 crc kubenswrapper[4769]: I0122 13:57:23.838260 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerDied","Data":"0058a5cb70907264a3bd840598d04dfd89eef9277b655c5bb5f7ffcc58fb8c08"} Jan 22 13:57:24 crc kubenswrapper[4769]: I0122 13:57:24.459944 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-qkpds" Jan 22 13:57:24 crc kubenswrapper[4769]: I0122 13:57:24.848197 4769 generic.go:334] "Generic (PLEG): container finished" podID="877a13a0-eef8-4409-b421-e3a8c23abc8a" containerID="357e5fab1e67b0264e8a717f0893477deaa40ba0df79be454535687b5ef66ab4" exitCode=0 Jan 22 13:57:24 crc kubenswrapper[4769]: I0122 13:57:24.848306 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerDied","Data":"357e5fab1e67b0264e8a717f0893477deaa40ba0df79be454535687b5ef66ab4"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859534 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"bf8b25ed283e88706b5eb9bd0a02bd919124739f28b86c203decfb0218d6c207"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859881 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"9e037a9aa0011434366e34154cef2f92ce2cc8ad9eaa421f2629c38e52a6f892"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859901 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"de38ddc38f913217f6bf8e96bb9374b6a83a7f650ab072caf046dc3b6fdcf370"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859909 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"de835798be4622074405bb08ccb35a8938baa18835b1c85228a8cd4dc0d8594d"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859916 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"9226fd54f07624c284f70d71dcff60a1a82bf49fff222edc61df42b1e92935a8"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.859923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5vm9t" 
event={"ID":"877a13a0-eef8-4409-b421-e3a8c23abc8a","Type":"ContainerStarted","Data":"5a5899357d3363d687ed9684517a18956c2f5b906036b055d572de360263aaf8"} Jan 22 13:57:25 crc kubenswrapper[4769]: I0122 13:57:25.885853 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5vm9t" podStartSLOduration=4.355966886 podStartE2EDuration="11.885831127s" podCreationTimestamp="2026-01-22 13:57:14 +0000 UTC" firstStartedPulling="2026-01-22 13:57:14.481348999 +0000 UTC m=+813.892458928" lastFinishedPulling="2026-01-22 13:57:22.01121324 +0000 UTC m=+821.422323169" observedRunningTime="2026-01-22 13:57:25.884943133 +0000 UTC m=+825.296053072" watchObservedRunningTime="2026-01-22 13:57:25.885831127 +0000 UTC m=+825.296941066" Jan 22 13:57:29 crc kubenswrapper[4769]: I0122 13:57:29.374316 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:29 crc kubenswrapper[4769]: I0122 13:57:29.419602 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:34 crc kubenswrapper[4769]: I0122 13:57:34.379076 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5vm9t" Jan 22 13:57:34 crc kubenswrapper[4769]: I0122 13:57:34.989722 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-9n85j" Jan 22 13:57:35 crc kubenswrapper[4769]: I0122 13:57:35.943749 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lwzgw" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.216314 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.217459 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.219068 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.219768 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-z8tw5" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.222135 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.271631 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.319749 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") pod \"openstack-operator-index-mkxkq\" (UID: \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\") " pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.421371 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") pod \"openstack-operator-index-mkxkq\" (UID: \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\") " pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.439381 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") pod \"openstack-operator-index-mkxkq\" (UID: \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\") " pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.537220 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.943948 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:39 crc kubenswrapper[4769]: I0122 13:57:39.958923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mkxkq" event={"ID":"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b","Type":"ContainerStarted","Data":"32c2c017510f58056652d0e7ab9dafab7031691572f95ab6890b77211d93e11e"} Jan 22 13:57:42 crc kubenswrapper[4769]: I0122 13:57:42.793729 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:42 crc kubenswrapper[4769]: I0122 13:57:42.990612 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mkxkq" event={"ID":"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b","Type":"ContainerStarted","Data":"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8"} Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.004208 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mkxkq" podStartSLOduration=1.601447917 podStartE2EDuration="4.004190292s" podCreationTimestamp="2026-01-22 13:57:39 +0000 UTC" firstStartedPulling="2026-01-22 13:57:39.951913503 +0000 UTC m=+839.363023432" lastFinishedPulling="2026-01-22 13:57:42.354655878 +0000 UTC m=+841.765765807" observedRunningTime="2026-01-22 13:57:43.002852316 +0000 UTC m=+842.413962275" watchObservedRunningTime="2026-01-22 13:57:43.004190292 +0000 UTC m=+842.415300231" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.394410 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-m6xzn"] Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.395127 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.403739 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m6xzn"] Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.571226 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gkpd\" (UniqueName: \"kubernetes.io/projected/a2d7498a-59be-42c8-913e-d8c8c596828f-kube-api-access-6gkpd\") pod \"openstack-operator-index-m6xzn\" (UID: \"a2d7498a-59be-42c8-913e-d8c8c596828f\") " pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.673079 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gkpd\" (UniqueName: \"kubernetes.io/projected/a2d7498a-59be-42c8-913e-d8c8c596828f-kube-api-access-6gkpd\") pod \"openstack-operator-index-m6xzn\" (UID: \"a2d7498a-59be-42c8-913e-d8c8c596828f\") " pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.691729 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gkpd\" (UniqueName: \"kubernetes.io/projected/a2d7498a-59be-42c8-913e-d8c8c596828f-kube-api-access-6gkpd\") pod \"openstack-operator-index-m6xzn\" (UID: \"a2d7498a-59be-42c8-913e-d8c8c596828f\") " pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.713680 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:43 crc kubenswrapper[4769]: I0122 13:57:43.951897 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-m6xzn"] Jan 22 13:57:43 crc kubenswrapper[4769]: W0122 13:57:43.961719 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2d7498a_59be_42c8_913e_d8c8c596828f.slice/crio-18e5e75281352b55873429c637b537eba1f50aff022764ddac0779eb099fb529 WatchSource:0}: Error finding container 18e5e75281352b55873429c637b537eba1f50aff022764ddac0779eb099fb529: Status 404 returned error can't find the container with id 18e5e75281352b55873429c637b537eba1f50aff022764ddac0779eb099fb529 Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.027105 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mkxkq" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerName="registry-server" containerID="cri-o://cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" gracePeriod=2 Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.027405 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m6xzn" event={"ID":"a2d7498a-59be-42c8-913e-d8c8c596828f","Type":"ContainerStarted","Data":"18e5e75281352b55873429c637b537eba1f50aff022764ddac0779eb099fb529"} Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.337171 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.425134 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") pod \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\" (UID: \"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b\") " Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.430752 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s" (OuterVolumeSpecName: "kube-api-access-p2m5s") pod "b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" (UID: "b06de39c-14ea-4ee9-9e2f-9185d1c2af7b"). InnerVolumeSpecName "kube-api-access-p2m5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:57:44 crc kubenswrapper[4769]: I0122 13:57:44.526349 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2m5s\" (UniqueName: \"kubernetes.io/projected/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b-kube-api-access-p2m5s\") on node \"crc\" DevicePath \"\"" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034632 4769 generic.go:334] "Generic (PLEG): container finished" podID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerID="cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" exitCode=0 Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034696 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mkxkq" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034711 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mkxkq" event={"ID":"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b","Type":"ContainerDied","Data":"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8"} Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034739 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mkxkq" event={"ID":"b06de39c-14ea-4ee9-9e2f-9185d1c2af7b","Type":"ContainerDied","Data":"32c2c017510f58056652d0e7ab9dafab7031691572f95ab6890b77211d93e11e"} Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.034755 4769 scope.go:117] "RemoveContainer" containerID="cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.038126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-m6xzn" event={"ID":"a2d7498a-59be-42c8-913e-d8c8c596828f","Type":"ContainerStarted","Data":"09bd46dc005e8a125d960a3e212bba6740b4f1e12b65a903b6e3c36f198449fb"} Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.049520 4769 scope.go:117] "RemoveContainer" containerID="cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" Jan 22 13:57:45 crc kubenswrapper[4769]: E0122 13:57:45.050691 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8\": container with ID starting with cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8 not found: ID does not exist" containerID="cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.050733 4769 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8"} err="failed to get container status \"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8\": rpc error: code = NotFound desc = could not find container \"cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8\": container with ID starting with cee6933352e999bdc00eb55b35f41b4a8c310ccb5b7dfbe67ab091352c1c98d8 not found: ID does not exist" Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.056724 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.061824 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mkxkq"] Jan 22 13:57:45 crc kubenswrapper[4769]: I0122 13:57:45.067952 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-m6xzn" podStartSLOduration=1.9862938190000001 podStartE2EDuration="2.067937147s" podCreationTimestamp="2026-01-22 13:57:43 +0000 UTC" firstStartedPulling="2026-01-22 13:57:43.972278959 +0000 UTC m=+843.383388888" lastFinishedPulling="2026-01-22 13:57:44.053922267 +0000 UTC m=+843.465032216" observedRunningTime="2026-01-22 13:57:45.067210397 +0000 UTC m=+844.478320326" watchObservedRunningTime="2026-01-22 13:57:45.067937147 +0000 UTC m=+844.479047076" Jan 22 13:57:46 crc kubenswrapper[4769]: I0122 13:57:46.892589 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" path="/var/lib/kubelet/pods/b06de39c-14ea-4ee9-9e2f-9185d1c2af7b/volumes" Jan 22 13:57:53 crc kubenswrapper[4769]: I0122 13:57:53.714190 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:53 crc kubenswrapper[4769]: I0122 13:57:53.714623 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:53 crc kubenswrapper[4769]: I0122 13:57:53.744447 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:54 crc kubenswrapper[4769]: I0122 13:57:54.118645 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-m6xzn" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.406201 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:57:56 crc kubenswrapper[4769]: E0122 13:57:56.407110 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerName="registry-server" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.407135 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerName="registry-server" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.407342 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06de39c-14ea-4ee9-9e2f-9185d1c2af7b" containerName="registry-server" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.408767 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.419980 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.493742 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.493847 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.493876 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.595461 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.595558 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.595588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.596180 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.596215 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.619010 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") pod \"redhat-marketplace-vf99m\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.726986 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:57:56 crc kubenswrapper[4769]: I0122 13:57:56.951272 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:57:56 crc kubenswrapper[4769]: W0122 13:57:56.958293 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19e34c89_b2d2_4bd3_a9b1_eff968aefea7.slice/crio-2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b WatchSource:0}: Error finding container 2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b: Status 404 returned error can't find the container with id 2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b Jan 22 13:57:57 crc kubenswrapper[4769]: I0122 13:57:57.111186 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerStarted","Data":"2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b"} Jan 22 13:57:58 crc kubenswrapper[4769]: I0122 13:57:58.119154 4769 generic.go:334] "Generic (PLEG): container finished" podID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerID="13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6" exitCode=0 Jan 22 13:57:58 crc kubenswrapper[4769]: I0122 13:57:58.119212 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerDied","Data":"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6"} Jan 22 13:57:59 crc kubenswrapper[4769]: I0122 13:57:59.130221 4769 generic.go:334] "Generic (PLEG): container finished" podID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerID="2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2" exitCode=0 Jan 22 13:57:59 crc kubenswrapper[4769]: I0122 13:57:59.130291 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerDied","Data":"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2"} Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.030092 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9"] Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.031841 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.034092 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vtwvl" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.043756 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9"] Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.140904 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerStarted","Data":"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e"} Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.141726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.141817 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.141845 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.164430 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vf99m" podStartSLOduration=2.7386126109999998 podStartE2EDuration="4.164415598s" podCreationTimestamp="2026-01-22 13:57:56 +0000 UTC" firstStartedPulling="2026-01-22 13:57:58.120892455 +0000 UTC m=+857.532002394" lastFinishedPulling="2026-01-22 13:57:59.546695422 +0000 UTC m=+858.957805381" observedRunningTime="2026-01-22 13:58:00.162485818 +0000 UTC m=+859.573595747" watchObservedRunningTime="2026-01-22 13:58:00.164415598 +0000 UTC m=+859.575525527" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243062 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243118 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243170 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243619 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.243920 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.260708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") pod \"c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.390517 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:00 crc kubenswrapper[4769]: I0122 13:58:00.774443 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9"] Jan 22 13:58:01 crc kubenswrapper[4769]: I0122 13:58:01.148121 4769 generic.go:334] "Generic (PLEG): container finished" podID="7585045d-5962-4b7d-903e-97f301a8fd47" containerID="84f76c48335d3300281282bed6e5d5410b7b65ceadfd7de286855f47cedb1ddf" exitCode=0 Jan 22 13:58:01 crc kubenswrapper[4769]: I0122 13:58:01.148208 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerDied","Data":"84f76c48335d3300281282bed6e5d5410b7b65ceadfd7de286855f47cedb1ddf"} Jan 22 13:58:01 crc kubenswrapper[4769]: I0122 13:58:01.148433 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerStarted","Data":"9b9ce0b2453aa1487353b09cd103d42a91675a35517546b8099b00dea85c2be4"} Jan 22 13:58:02 crc kubenswrapper[4769]: I0122 13:58:02.167567 4769 generic.go:334] "Generic (PLEG): container finished" podID="7585045d-5962-4b7d-903e-97f301a8fd47" containerID="e82024a8ed83f437850abd823180c33a14c69bdac45a7c97bc85801c44fe4add" exitCode=0 Jan 22 13:58:02 crc kubenswrapper[4769]: I0122 13:58:02.167668 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerDied","Data":"e82024a8ed83f437850abd823180c33a14c69bdac45a7c97bc85801c44fe4add"} Jan 22 13:58:03 crc kubenswrapper[4769]: I0122 13:58:03.186209 4769 generic.go:334] "Generic (PLEG): container finished" podID="7585045d-5962-4b7d-903e-97f301a8fd47" containerID="2a705bd50a434df768b1e6946a1bad83acaaac3593937a8650f6fd00ee6bfee8" exitCode=0 Jan 22 13:58:03 crc kubenswrapper[4769]: I0122 13:58:03.186555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerDied","Data":"2a705bd50a434df768b1e6946a1bad83acaaac3593937a8650f6fd00ee6bfee8"} Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.432850 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.495876 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") pod \"7585045d-5962-4b7d-903e-97f301a8fd47\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.495935 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") pod \"7585045d-5962-4b7d-903e-97f301a8fd47\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.495957 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") pod \"7585045d-5962-4b7d-903e-97f301a8fd47\" (UID: \"7585045d-5962-4b7d-903e-97f301a8fd47\") " Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.497215 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle" (OuterVolumeSpecName: "bundle") pod "7585045d-5962-4b7d-903e-97f301a8fd47" (UID: "7585045d-5962-4b7d-903e-97f301a8fd47"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.501429 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv" (OuterVolumeSpecName: "kube-api-access-9pdjv") pod "7585045d-5962-4b7d-903e-97f301a8fd47" (UID: "7585045d-5962-4b7d-903e-97f301a8fd47"). InnerVolumeSpecName "kube-api-access-9pdjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.509738 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util" (OuterVolumeSpecName: "util") pod "7585045d-5962-4b7d-903e-97f301a8fd47" (UID: "7585045d-5962-4b7d-903e-97f301a8fd47"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.597779 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pdjv\" (UniqueName: \"kubernetes.io/projected/7585045d-5962-4b7d-903e-97f301a8fd47-kube-api-access-9pdjv\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.597900 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:04 crc kubenswrapper[4769]: I0122 13:58:04.597912 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7585045d-5962-4b7d-903e-97f301a8fd47-util\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:05 crc kubenswrapper[4769]: I0122 13:58:05.201301 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" event={"ID":"7585045d-5962-4b7d-903e-97f301a8fd47","Type":"ContainerDied","Data":"9b9ce0b2453aa1487353b09cd103d42a91675a35517546b8099b00dea85c2be4"} Jan 22 13:58:05 crc kubenswrapper[4769]: I0122 13:58:05.201345 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b9ce0b2453aa1487353b09cd103d42a91675a35517546b8099b00dea85c2be4" Jan 22 13:58:05 crc kubenswrapper[4769]: I0122 13:58:05.201414 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9" Jan 22 13:58:06 crc kubenswrapper[4769]: I0122 13:58:06.727347 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:06 crc kubenswrapper[4769]: I0122 13:58:06.727876 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:06 crc kubenswrapper[4769]: I0122 13:58:06.788414 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:07 crc kubenswrapper[4769]: I0122 13:58:07.255259 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.452670 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h"] Jan 22 13:58:08 crc kubenswrapper[4769]: E0122 13:58:08.453244 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="extract" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453259 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="extract" Jan 22 13:58:08 crc kubenswrapper[4769]: E0122 13:58:08.453274 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="pull" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453280 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="pull" Jan 22 13:58:08 crc kubenswrapper[4769]: E0122 13:58:08.453293 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="util" Jan 22 
13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453300 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="util" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453407 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7585045d-5962-4b7d-903e-97f301a8fd47" containerName="extract" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.453835 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.457518 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qkrbx" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.488596 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h"] Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.548833 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxk4\" (UniqueName: \"kubernetes.io/projected/a48b50b3-ad51-4268-a926-bf2f1d7fd3f6-kube-api-access-rbxk4\") pod \"openstack-operator-controller-init-f94887bb5-8mc8h\" (UID: \"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6\") " pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.591356 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.650406 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbxk4\" (UniqueName: \"kubernetes.io/projected/a48b50b3-ad51-4268-a926-bf2f1d7fd3f6-kube-api-access-rbxk4\") pod \"openstack-operator-controller-init-f94887bb5-8mc8h\" (UID: \"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6\") " pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.668716 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbxk4\" (UniqueName: \"kubernetes.io/projected/a48b50b3-ad51-4268-a926-bf2f1d7fd3f6-kube-api-access-rbxk4\") pod \"openstack-operator-controller-init-f94887bb5-8mc8h\" (UID: \"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6\") " pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:08 crc kubenswrapper[4769]: I0122 13:58:08.775190 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:09 crc kubenswrapper[4769]: I0122 13:58:09.223474 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vf99m" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="registry-server" containerID="cri-o://32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" gracePeriod=2 Jan 22 13:58:09 crc kubenswrapper[4769]: I0122 13:58:09.230282 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h"] Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.108703 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.172536 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") pod \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.172598 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") pod \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.172628 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") pod \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\" (UID: \"19e34c89-b2d2-4bd3-a9b1-eff968aefea7\") " Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.173535 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities" (OuterVolumeSpecName: "utilities") pod "19e34c89-b2d2-4bd3-a9b1-eff968aefea7" (UID: "19e34c89-b2d2-4bd3-a9b1-eff968aefea7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.177910 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc" (OuterVolumeSpecName: "kube-api-access-r76nc") pod "19e34c89-b2d2-4bd3-a9b1-eff968aefea7" (UID: "19e34c89-b2d2-4bd3-a9b1-eff968aefea7"). InnerVolumeSpecName "kube-api-access-r76nc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.182598 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.182636 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r76nc\" (UniqueName: \"kubernetes.io/projected/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-kube-api-access-r76nc\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.196661 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19e34c89-b2d2-4bd3-a9b1-eff968aefea7" (UID: "19e34c89-b2d2-4bd3-a9b1-eff968aefea7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.230473 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" event={"ID":"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6","Type":"ContainerStarted","Data":"c21357eb21f14705f81f6e0a52164ba4dfaea6d84839a44ef65b7b41522cbb28"} Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233766 4769 generic.go:334] "Generic (PLEG): container finished" podID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerID="32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" exitCode=0 Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233814 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerDied","Data":"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e"} Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233840 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf99m" event={"ID":"19e34c89-b2d2-4bd3-a9b1-eff968aefea7","Type":"ContainerDied","Data":"2e64b6e8a520f3843ec7c9ae5982056e8fcd13959c76144eb1858b806a0dcc3b"} Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233857 4769 scope.go:117] "RemoveContainer" containerID="32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.233880 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf99m" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.251232 4769 scope.go:117] "RemoveContainer" containerID="2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.262688 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.269876 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf99m"] Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.283630 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e34c89-b2d2-4bd3-a9b1-eff968aefea7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.287398 4769 scope.go:117] "RemoveContainer" containerID="13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.317170 4769 scope.go:117] "RemoveContainer" containerID="32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" Jan 22 13:58:10 crc kubenswrapper[4769]: E0122 13:58:10.319492 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e\": container with ID starting with 32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e not found: ID does not exist" containerID="32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.319546 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e"} err="failed to get container 
status \"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e\": rpc error: code = NotFound desc = could not find container \"32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e\": container with ID starting with 32d2c57e67c586f7ef14b5cdd5a595883b1783d97fd812f954865eaeeda5831e not found: ID does not exist" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.319576 4769 scope.go:117] "RemoveContainer" containerID="2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2" Jan 22 13:58:10 crc kubenswrapper[4769]: E0122 13:58:10.320015 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2\": container with ID starting with 2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2 not found: ID does not exist" containerID="2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.320086 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2"} err="failed to get container status \"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2\": rpc error: code = NotFound desc = could not find container \"2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2\": container with ID starting with 2b9202e4245ed91c20beeb633e4f1181139c96e55d88de4beac3b7578eb742b2 not found: ID does not exist" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.320129 4769 scope.go:117] "RemoveContainer" containerID="13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6" Jan 22 13:58:10 crc kubenswrapper[4769]: E0122 13:58:10.320503 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6\": container with ID starting with 13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6 not found: ID does not exist" containerID="13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.320550 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6"} err="failed to get container status \"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6\": rpc error: code = NotFound desc = could not find container \"13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6\": container with ID starting with 13b66d99502694caaa890328fbe448d01ad157fe9766454de4bcbc559f093be6 not found: ID does not exist" Jan 22 13:58:10 crc kubenswrapper[4769]: I0122 13:58:10.894916 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" path="/var/lib/kubelet/pods/19e34c89-b2d2-4bd3-a9b1-eff968aefea7/volumes" Jan 22 13:58:14 crc kubenswrapper[4769]: I0122 13:58:14.273565 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" event={"ID":"a48b50b3-ad51-4268-a926-bf2f1d7fd3f6","Type":"ContainerStarted","Data":"41d130a51a375bacfd08438e3b3dda9d87e38aa7e29fbe6a9290bbec5e09c848"} Jan 22 13:58:14 crc kubenswrapper[4769]: I0122 13:58:14.274230 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:14 crc kubenswrapper[4769]: I0122 13:58:14.310035 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" podStartSLOduration=2.314095205 podStartE2EDuration="6.310008308s" podCreationTimestamp="2026-01-22 13:58:08 +0000 UTC" firstStartedPulling="2026-01-22 13:58:09.242217444 +0000 UTC m=+868.653327373" lastFinishedPulling="2026-01-22 13:58:13.238130547 +0000 UTC m=+872.649240476" observedRunningTime="2026-01-22 13:58:14.302002539 +0000 UTC m=+873.713112508" watchObservedRunningTime="2026-01-22 13:58:14.310008308 +0000 UTC m=+873.721118257" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.778195 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f94887bb5-8mc8h" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868050 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:18 crc kubenswrapper[4769]: E0122 13:58:18.868262 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="registry-server" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868272 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="registry-server" Jan 22 13:58:18 crc kubenswrapper[4769]: E0122 13:58:18.868287 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="extract-utilities" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868293 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="extract-utilities" Jan 22 13:58:18 crc kubenswrapper[4769]: E0122 13:58:18.868303 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="extract-content" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868310 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="extract-content" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.868415 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e34c89-b2d2-4bd3-a9b1-eff968aefea7" containerName="registry-server" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.869431 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:18 crc kubenswrapper[4769]: I0122 13:58:18.897506 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.011443 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.011498 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.011727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.112926 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.113007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.113026 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.113633 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.113662 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.137877 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") pod \"certified-operators-hgq6q\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.187907 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:19 crc kubenswrapper[4769]: I0122 13:58:19.522067 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:20 crc kubenswrapper[4769]: I0122 13:58:20.321719 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerID="f64c48d9a4bfbecab5fb131323005a1c9b76790aa7fb985297132eec5177d55d" exitCode=0 Jan 22 13:58:20 crc kubenswrapper[4769]: I0122 13:58:20.321759 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerDied","Data":"f64c48d9a4bfbecab5fb131323005a1c9b76790aa7fb985297132eec5177d55d"} Jan 22 13:58:20 crc kubenswrapper[4769]: I0122 13:58:20.321838 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerStarted","Data":"1ac9d31a1466ddc11a2d3ca5584af4c7f38778847f983d6cd1e3693f55b65e45"} Jan 22 13:58:21 crc kubenswrapper[4769]: I0122 13:58:21.328299 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerStarted","Data":"47b3317b91b200d1fe0da34fe44cbd7828b32d291e0274869306e6f7a9f67836"} Jan 22 13:58:22 crc kubenswrapper[4769]: I0122 13:58:22.334274 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerID="47b3317b91b200d1fe0da34fe44cbd7828b32d291e0274869306e6f7a9f67836" exitCode=0 Jan 22 13:58:22 crc kubenswrapper[4769]: I0122 13:58:22.334318 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerDied","Data":"47b3317b91b200d1fe0da34fe44cbd7828b32d291e0274869306e6f7a9f67836"} Jan 22 13:58:23 crc kubenswrapper[4769]: I0122 13:58:23.359352 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerStarted","Data":"3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80"} Jan 22 13:58:23 crc kubenswrapper[4769]: I0122 13:58:23.382304 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hgq6q" podStartSLOduration=2.933216006 podStartE2EDuration="5.382278008s" podCreationTimestamp="2026-01-22 13:58:18 +0000 UTC" firstStartedPulling="2026-01-22 13:58:20.323372435 +0000 UTC m=+879.734482354" lastFinishedPulling="2026-01-22 13:58:22.772434437 +0000 UTC m=+882.183544356" observedRunningTime="2026-01-22 13:58:23.381971499 +0000 UTC m=+882.793081518" watchObservedRunningTime="2026-01-22 13:58:23.382278008 +0000 UTC m=+882.793387937" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.188596 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.189239 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.234891 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.440594 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:29 crc kubenswrapper[4769]: I0122 13:58:29.483155 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.404232 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hgq6q" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="registry-server" containerID="cri-o://3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80" gracePeriod=2 Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.890581 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.892168 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.916511 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.992274 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.992341 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:31 crc kubenswrapper[4769]: I0122 13:58:31.992392 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094059 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094146 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094179 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094644 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.094702 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.121303 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") pod \"community-operators-hslhq\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.208776 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:32 crc kubenswrapper[4769]: I0122 13:58:32.514952 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:58:33 crc kubenswrapper[4769]: W0122 13:58:33.071916 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf4cf7c_e696_4123_af54_e8f96242dea3.slice/crio-0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7 WatchSource:0}: Error finding container 0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7: Status 404 returned error can't find the container with id 0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7 Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.419339 4769 generic.go:334] "Generic (PLEG): container finished" podID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerID="7c1458b4e0b7ea6519275d802b12eea4d4603db4985bd4c7ba57075375cf25a8" exitCode=0 Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.419476 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerDied","Data":"7c1458b4e0b7ea6519275d802b12eea4d4603db4985bd4c7ba57075375cf25a8"} Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.419732 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerStarted","Data":"0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7"} Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.424479 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerID="3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80" exitCode=0 Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.424530 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerDied","Data":"3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80"} Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.609018 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.655265 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") pod \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.655344 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") pod \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.655371 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") pod \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\" (UID: \"c9017724-ecca-4b60-89eb-c21ac37ad9fd\") " Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.656907 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities" (OuterVolumeSpecName: "utilities") pod "c9017724-ecca-4b60-89eb-c21ac37ad9fd" (UID: "c9017724-ecca-4b60-89eb-c21ac37ad9fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.657063 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.661049 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj" (OuterVolumeSpecName: "kube-api-access-spsxj") pod "c9017724-ecca-4b60-89eb-c21ac37ad9fd" (UID: "c9017724-ecca-4b60-89eb-c21ac37ad9fd"). InnerVolumeSpecName "kube-api-access-spsxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.700503 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9017724-ecca-4b60-89eb-c21ac37ad9fd" (UID: "c9017724-ecca-4b60-89eb-c21ac37ad9fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.758363 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9017724-ecca-4b60-89eb-c21ac37ad9fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:33 crc kubenswrapper[4769]: I0122 13:58:33.758394 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spsxj\" (UniqueName: \"kubernetes.io/projected/c9017724-ecca-4b60-89eb-c21ac37ad9fd-kube-api-access-spsxj\") on node \"crc\" DevicePath \"\"" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.431386 4769 generic.go:334] "Generic (PLEG): container finished" podID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerID="ecd6b7d791c1fc22812115bf124726f845b9a1695d08053991cc5bf7429a01b6" exitCode=0 Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.431536 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerDied","Data":"ecd6b7d791c1fc22812115bf124726f845b9a1695d08053991cc5bf7429a01b6"} Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.434680 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hgq6q" event={"ID":"c9017724-ecca-4b60-89eb-c21ac37ad9fd","Type":"ContainerDied","Data":"1ac9d31a1466ddc11a2d3ca5584af4c7f38778847f983d6cd1e3693f55b65e45"} Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.434723 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hgq6q" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.434830 4769 scope.go:117] "RemoveContainer" containerID="3363a488503f9f15d115a1ab498ea56bc79e8c52c65cfe4a96c7a2d96e9fff80" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.467457 4769 scope.go:117] "RemoveContainer" containerID="47b3317b91b200d1fe0da34fe44cbd7828b32d291e0274869306e6f7a9f67836" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.483586 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.486934 4769 scope.go:117] "RemoveContainer" containerID="f64c48d9a4bfbecab5fb131323005a1c9b76790aa7fb985297132eec5177d55d" Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.501805 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hgq6q"] Jan 22 13:58:34 crc kubenswrapper[4769]: I0122 13:58:34.890165 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" path="/var/lib/kubelet/pods/c9017724-ecca-4b60-89eb-c21ac37ad9fd/volumes" Jan 22 13:58:35 crc kubenswrapper[4769]: I0122 13:58:35.444286 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerStarted","Data":"cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333"} Jan 22 13:58:35 crc kubenswrapper[4769]: I0122 13:58:35.459346 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hslhq" podStartSLOduration=3.002467977 podStartE2EDuration="4.459331343s" podCreationTimestamp="2026-01-22 13:58:31 +0000 UTC" firstStartedPulling="2026-01-22 13:58:33.423961383 +0000 UTC m=+892.835071322" 
lastFinishedPulling="2026-01-22 13:58:34.880824759 +0000 UTC m=+894.291934688" observedRunningTime="2026-01-22 13:58:35.456993882 +0000 UTC m=+894.868103811" watchObservedRunningTime="2026-01-22 13:58:35.459331343 +0000 UTC m=+894.870441272" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.481510 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q"] Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.482286 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="extract-content" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482298 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="extract-content" Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.482315 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="extract-utilities" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482321 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="extract-utilities" Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.482329 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="registry-server" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482334 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="registry-server" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482453 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9017724-ecca-4b60-89eb-c21ac37ad9fd" containerName="registry-server" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.482870 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.485503 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jcqt2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.509191 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.509237 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.509895 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.520209 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.520882 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.521941 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-nvqlt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.523920 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjzm\" (UniqueName: \"kubernetes.io/projected/c6b325d8-50c6-411a-bc7f-938b284f0efb-kube-api-access-vgjzm\") pod \"designate-operator-controller-manager-b45d7bf98-rlcb9\" (UID: \"c6b325d8-50c6-411a-bc7f-938b284f0efb\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.523977 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl5dd\" (UniqueName: \"kubernetes.io/projected/bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049-kube-api-access-fl5dd\") pod \"cinder-operator-controller-manager-69cf5d4557-2q2v2\" (UID: \"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.524011 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/141f0476-23eb-4a43-a4ac-4d33c12bfb5b-kube-api-access-k9ss9\") pod \"barbican-operator-controller-manager-59dd8b7cbf-54q5q\" (UID: \"141f0476-23eb-4a43-a4ac-4d33c12bfb5b\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.524693 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-9tkrs" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.534681 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.535506 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.545902 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-2wkst" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.550422 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.560125 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.573540 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.574556 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.578284 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cppgt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.579825 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.580702 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.588251 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7b6pf" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.601042 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.609276 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.619497 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626299 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plxd9\" (UniqueName: \"kubernetes.io/projected/d40b03ae-0991-4364-85f3-89cf5e8d5686-kube-api-access-plxd9\") pod \"heat-operator-controller-manager-594c8c9d5d-brq9d\" (UID: \"d40b03ae-0991-4364-85f3-89cf5e8d5686\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626350 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs8nq\" (UniqueName: \"kubernetes.io/projected/7d908338-dcdc-4423-b719-02d30f3834ed-kube-api-access-hs8nq\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rxgq\" (UID: \"7d908338-dcdc-4423-b719-02d30f3834ed\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626387 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjzm\" (UniqueName: \"kubernetes.io/projected/c6b325d8-50c6-411a-bc7f-938b284f0efb-kube-api-access-vgjzm\") pod \"designate-operator-controller-manager-b45d7bf98-rlcb9\" (UID: \"c6b325d8-50c6-411a-bc7f-938b284f0efb\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626417 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl5dd\" (UniqueName: \"kubernetes.io/projected/bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049-kube-api-access-fl5dd\") pod \"cinder-operator-controller-manager-69cf5d4557-2q2v2\" (UID: \"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626436 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-whx6b\" (UniqueName: \"kubernetes.io/projected/ae11ee9d-5ccf-490d-b457-294820d6a337-kube-api-access-whx6b\") pod \"glance-operator-controller-manager-78fdd796fd-wvxp8\" (UID: \"ae11ee9d-5ccf-490d-b457-294820d6a337\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.626457 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/141f0476-23eb-4a43-a4ac-4d33c12bfb5b-kube-api-access-k9ss9\") pod \"barbican-operator-controller-manager-59dd8b7cbf-54q5q\" (UID: \"141f0476-23eb-4a43-a4ac-4d33c12bfb5b\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.631688 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.632442 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.637203 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.640513 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c2drt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.640687 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.655072 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.663668 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9ss9\" (UniqueName: \"kubernetes.io/projected/141f0476-23eb-4a43-a4ac-4d33c12bfb5b-kube-api-access-k9ss9\") pod \"barbican-operator-controller-manager-59dd8b7cbf-54q5q\" (UID: \"141f0476-23eb-4a43-a4ac-4d33c12bfb5b\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.669929 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjzm\" (UniqueName: \"kubernetes.io/projected/c6b325d8-50c6-411a-bc7f-938b284f0efb-kube-api-access-vgjzm\") pod \"designate-operator-controller-manager-b45d7bf98-rlcb9\" (UID: \"c6b325d8-50c6-411a-bc7f-938b284f0efb\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.681039 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.690318 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wpg5l" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.703479 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl5dd\" (UniqueName: \"kubernetes.io/projected/bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049-kube-api-access-fl5dd\") pod \"cinder-operator-controller-manager-69cf5d4557-2q2v2\" (UID: \"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.713879 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.738134 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plxd9\" (UniqueName: \"kubernetes.io/projected/d40b03ae-0991-4364-85f3-89cf5e8d5686-kube-api-access-plxd9\") pod \"heat-operator-controller-manager-594c8c9d5d-brq9d\" (UID: \"d40b03ae-0991-4364-85f3-89cf5e8d5686\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.738206 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs8nq\" (UniqueName: \"kubernetes.io/projected/7d908338-dcdc-4423-b719-02d30f3834ed-kube-api-access-hs8nq\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rxgq\" (UID: \"7d908338-dcdc-4423-b719-02d30f3834ed\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.738266 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whx6b\" (UniqueName: \"kubernetes.io/projected/ae11ee9d-5ccf-490d-b457-294820d6a337-kube-api-access-whx6b\") pod \"glance-operator-controller-manager-78fdd796fd-wvxp8\" (UID: \"ae11ee9d-5ccf-490d-b457-294820d6a337\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.757813 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.758586 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.759850 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plxd9\" (UniqueName: \"kubernetes.io/projected/d40b03ae-0991-4364-85f3-89cf5e8d5686-kube-api-access-plxd9\") pod \"heat-operator-controller-manager-594c8c9d5d-brq9d\" (UID: \"d40b03ae-0991-4364-85f3-89cf5e8d5686\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.762339 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xcl4h" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.772592 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs8nq\" (UniqueName: \"kubernetes.io/projected/7d908338-dcdc-4423-b719-02d30f3834ed-kube-api-access-hs8nq\") pod \"horizon-operator-controller-manager-77d5c5b54f-8rxgq\" (UID: \"7d908338-dcdc-4423-b719-02d30f3834ed\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.792854 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whx6b\" (UniqueName: \"kubernetes.io/projected/ae11ee9d-5ccf-490d-b457-294820d6a337-kube-api-access-whx6b\") pod \"glance-operator-controller-manager-78fdd796fd-wvxp8\" (UID: \"ae11ee9d-5ccf-490d-b457-294820d6a337\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.796467 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.797223 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.799610 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-zr2bd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.806438 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.807264 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.808958 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-nm9km" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.810949 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.812132 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.814868 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.815507 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.817005 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-smdsm" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.817272 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-z9ctc" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.819362 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.823512 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.828949 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.839731 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.839802 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfwj\" (UniqueName: \"kubernetes.io/projected/13c33fdb-b388-4fdf-996c-544286f47a73-kube-api-access-sqfwj\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.839856 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-782cz\" (UniqueName: \"kubernetes.io/projected/c367fcfb-38d9-4834-970d-7004d16c8249-kube-api-access-782cz\") pod \"ironic-operator-controller-manager-69d6c9f5b8-5njtw\" (UID: \"c367fcfb-38d9-4834-970d-7004d16c8249\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.840491 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.846528 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.853485 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.860090 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.861081 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.866190 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.866551 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-c6mn2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.869934 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.874018 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.880058 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.881720 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.881991 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.882908 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sn876" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.884463 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.890175 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-p88l8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.890405 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.897254 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.897316 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.898015 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.899721 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-glwh9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.904068 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.904191 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.921629 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.922613 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.933844 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.935037 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.939842 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-nb5bz" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.943340 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm"] Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.943627 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944376 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttq9d\" (UniqueName: \"kubernetes.io/projected/ebd5834b-ef11-40bb-9d15-6878767e7bef-kube-api-access-ttq9d\") pod \"neutron-operator-controller-manager-5d8f59fb49-x8dvt\" (UID: \"ebd5834b-ef11-40bb-9d15-6878767e7bef\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944408 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782cz\" (UniqueName: \"kubernetes.io/projected/c367fcfb-38d9-4834-970d-7004d16c8249-kube-api-access-782cz\") pod \"ironic-operator-controller-manager-69d6c9f5b8-5njtw\" (UID: \"c367fcfb-38d9-4834-970d-7004d16c8249\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944439 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znk26\" (UniqueName: \"kubernetes.io/projected/80a16478-da8a-4d2f-89df-163fada49abe-kube-api-access-znk26\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwhh9\" (UID: \"80a16478-da8a-4d2f-89df-163fada49abe\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944464 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944499 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5bv\" (UniqueName: \"kubernetes.io/projected/3d8a97d6-e3bd-49e0-bc78-024286cce303-kube-api-access-bt5bv\") pod \"manila-operator-controller-manager-78c6999f6f-ttb7f\" (UID: \"3d8a97d6-e3bd-49e0-bc78-024286cce303\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944519 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5j2b\" (UniqueName: \"kubernetes.io/projected/a32a1e6f-004c-4675-abed-10078b43492a-kube-api-access-p5j2b\") pod \"mariadb-operator-controller-manager-c87fff755-w77v6\" (UID: \"a32a1e6f-004c-4675-abed-10078b43492a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944535 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvd2\" (UniqueName: \"kubernetes.io/projected/d8d08194-af60-4614-b425-1b45340cd73b-kube-api-access-dbvd2\") pod \"keystone-operator-controller-manager-b8b6d4659-f2klg\" (UID: \"d8d08194-af60-4614-b425-1b45340cd73b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.944559 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqfwj\" 
(UniqueName: \"kubernetes.io/projected/13c33fdb-b388-4fdf-996c-544286f47a73-kube-api-access-sqfwj\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.945739 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:38 crc kubenswrapper[4769]: E0122 13:58:38.945781 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:39.445766329 +0000 UTC m=+898.856876248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.978757 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782cz\" (UniqueName: \"kubernetes.io/projected/c367fcfb-38d9-4834-970d-7004d16c8249-kube-api-access-782cz\") pod \"ironic-operator-controller-manager-69d6c9f5b8-5njtw\" (UID: \"c367fcfb-38d9-4834-970d-7004d16c8249\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:38 crc kubenswrapper[4769]: I0122 13:58:38.982400 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqfwj\" (UniqueName: \"kubernetes.io/projected/13c33fdb-b388-4fdf-996c-544286f47a73-kube-api-access-sqfwj\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.029698 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.030563 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.033527 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-v76vj" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.039187 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045736 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5bv\" (UniqueName: \"kubernetes.io/projected/3d8a97d6-e3bd-49e0-bc78-024286cce303-kube-api-access-bt5bv\") pod \"manila-operator-controller-manager-78c6999f6f-ttb7f\" (UID: \"3d8a97d6-e3bd-49e0-bc78-024286cce303\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045779 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnphp\" (UniqueName: \"kubernetes.io/projected/f13c0d19-4c14-4897-bbc5-5c220d207e41-kube-api-access-dnphp\") pod \"ovn-operator-controller-manager-55db956ddc-ctf5z\" (UID: \"f13c0d19-4c14-4897-bbc5-5c220d207e41\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045827 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5j2b\" (UniqueName: \"kubernetes.io/projected/a32a1e6f-004c-4675-abed-10078b43492a-kube-api-access-p5j2b\") pod \"mariadb-operator-controller-manager-c87fff755-w77v6\" (UID: \"a32a1e6f-004c-4675-abed-10078b43492a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045851 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvd2\" (UniqueName: \"kubernetes.io/projected/d8d08194-af60-4614-b425-1b45340cd73b-kube-api-access-dbvd2\") pod \"keystone-operator-controller-manager-b8b6d4659-f2klg\" (UID: \"d8d08194-af60-4614-b425-1b45340cd73b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045891 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldb9n\" (UniqueName: \"kubernetes.io/projected/d931ff7f-f554-4249-bc34-2cd09fc97427-kube-api-access-ldb9n\") pod \"swift-operator-controller-manager-547cbdb99f-jbtsm\" (UID: \"d931ff7f-f554-4249-bc34-2cd09fc97427\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045913 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r95kw\" (UniqueName: 
\"kubernetes.io/projected/11299941-70c0-41a8-ad9c-5c4648c3aa95-kube-api-access-r95kw\") pod \"placement-operator-controller-manager-5d646b7d76-prfwv\" (UID: \"11299941-70c0-41a8-ad9c-5c4648c3aa95\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045935 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9r67\" (UniqueName: \"kubernetes.io/projected/8217a619-751c-4d07-a96c-ce3208f08e84-kube-api-access-r9r67\") pod \"octavia-operator-controller-manager-7bd9774b6-fzz6p\" (UID: \"8217a619-751c-4d07-a96c-ce3208f08e84\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.045996 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttq9d\" (UniqueName: \"kubernetes.io/projected/ebd5834b-ef11-40bb-9d15-6878767e7bef-kube-api-access-ttq9d\") pod \"neutron-operator-controller-manager-5d8f59fb49-x8dvt\" (UID: \"ebd5834b-ef11-40bb-9d15-6878767e7bef\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.046021 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csb7\" (UniqueName: \"kubernetes.io/projected/2b0a07de-4458-4970-a304-a608625bdebf-kube-api-access-6csb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.046062 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znk26\" (UniqueName: \"kubernetes.io/projected/80a16478-da8a-4d2f-89df-163fada49abe-kube-api-access-znk26\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwhh9\" (UID: \"80a16478-da8a-4d2f-89df-163fada49abe\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.057615 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.070159 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbvd2\" (UniqueName: \"kubernetes.io/projected/d8d08194-af60-4614-b425-1b45340cd73b-kube-api-access-dbvd2\") pod \"keystone-operator-controller-manager-b8b6d4659-f2klg\" (UID: \"d8d08194-af60-4614-b425-1b45340cd73b\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.076475 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5bv\" (UniqueName: \"kubernetes.io/projected/3d8a97d6-e3bd-49e0-bc78-024286cce303-kube-api-access-bt5bv\") pod \"manila-operator-controller-manager-78c6999f6f-ttb7f\" (UID: \"3d8a97d6-e3bd-49e0-bc78-024286cce303\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.077536 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5j2b\" (UniqueName: \"kubernetes.io/projected/a32a1e6f-004c-4675-abed-10078b43492a-kube-api-access-p5j2b\") pod \"mariadb-operator-controller-manager-c87fff755-w77v6\" (UID: \"a32a1e6f-004c-4675-abed-10078b43492a\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.079519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znk26\" (UniqueName: \"kubernetes.io/projected/80a16478-da8a-4d2f-89df-163fada49abe-kube-api-access-znk26\") pod \"nova-operator-controller-manager-6b8bc8d87d-mwhh9\" (UID: \"80a16478-da8a-4d2f-89df-163fada49abe\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.092490 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttq9d\" (UniqueName: \"kubernetes.io/projected/ebd5834b-ef11-40bb-9d15-6878767e7bef-kube-api-access-ttq9d\") pod \"neutron-operator-controller-manager-5d8f59fb49-x8dvt\" (UID: \"ebd5834b-ef11-40bb-9d15-6878767e7bef\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.136985 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.154965 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155210 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnphp\" (UniqueName: \"kubernetes.io/projected/f13c0d19-4c14-4897-bbc5-5c220d207e41-kube-api-access-dnphp\") pod \"ovn-operator-controller-manager-55db956ddc-ctf5z\" (UID: \"f13c0d19-4c14-4897-bbc5-5c220d207e41\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155241 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldb9n\" (UniqueName: \"kubernetes.io/projected/d931ff7f-f554-4249-bc34-2cd09fc97427-kube-api-access-ldb9n\") pod \"swift-operator-controller-manager-547cbdb99f-jbtsm\" (UID: \"d931ff7f-f554-4249-bc34-2cd09fc97427\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155280 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r95kw\" (UniqueName: \"kubernetes.io/projected/11299941-70c0-41a8-ad9c-5c4648c3aa95-kube-api-access-r95kw\") pod \"placement-operator-controller-manager-5d646b7d76-prfwv\" (UID: \"11299941-70c0-41a8-ad9c-5c4648c3aa95\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155303 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9r67\" (UniqueName: \"kubernetes.io/projected/8217a619-751c-4d07-a96c-ce3208f08e84-kube-api-access-r9r67\") pod \"octavia-operator-controller-manager-7bd9774b6-fzz6p\" (UID: \"8217a619-751c-4d07-a96c-ce3208f08e84\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155357 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csb7\" (UniqueName: \"kubernetes.io/projected/2b0a07de-4458-4970-a304-a608625bdebf-kube-api-access-6csb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.155385 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdn8\" (UniqueName: \"kubernetes.io/projected/3c6369d9-2ecf-4187-bb10-76bde13ecd5d-kube-api-access-kqdn8\") pod \"telemetry-operator-controller-manager-85cd9769bb-gwzt2\" (UID: \"3c6369d9-2ecf-4187-bb10-76bde13ecd5d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.156165 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.157463 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.157538 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:58:39.657519044 +0000 UTC m=+899.068628973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.162129 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.170022 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.171526 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.172950 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.175737 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mwwp4" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.181281 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r95kw\" (UniqueName: \"kubernetes.io/projected/11299941-70c0-41a8-ad9c-5c4648c3aa95-kube-api-access-r95kw\") pod \"placement-operator-controller-manager-5d646b7d76-prfwv\" (UID: \"11299941-70c0-41a8-ad9c-5c4648c3aa95\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.185192 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnphp\" (UniqueName: \"kubernetes.io/projected/f13c0d19-4c14-4897-bbc5-5c220d207e41-kube-api-access-dnphp\") pod \"ovn-operator-controller-manager-55db956ddc-ctf5z\" (UID: \"f13c0d19-4c14-4897-bbc5-5c220d207e41\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.190150 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.194414 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.190484 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.202597 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldb9n\" (UniqueName: \"kubernetes.io/projected/d931ff7f-f554-4249-bc34-2cd09fc97427-kube-api-access-ldb9n\") pod \"swift-operator-controller-manager-547cbdb99f-jbtsm\" (UID: \"d931ff7f-f554-4249-bc34-2cd09fc97427\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.202838 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9r67\" (UniqueName: \"kubernetes.io/projected/8217a619-751c-4d07-a96c-ce3208f08e84-kube-api-access-r9r67\") pod \"octavia-operator-controller-manager-7bd9774b6-fzz6p\" (UID: \"8217a619-751c-4d07-a96c-ce3208f08e84\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.209090 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.214855 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csb7\" (UniqueName: \"kubernetes.io/projected/2b0a07de-4458-4970-a304-a608625bdebf-kube-api-access-6csb7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.215485 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.237087 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.252631 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.252870 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-r848c" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.274226 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqdn8\" (UniqueName: \"kubernetes.io/projected/3c6369d9-2ecf-4187-bb10-76bde13ecd5d-kube-api-access-kqdn8\") pod \"telemetry-operator-controller-manager-85cd9769bb-gwzt2\" (UID: \"3c6369d9-2ecf-4187-bb10-76bde13ecd5d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.274301 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqwvz\" (UniqueName: \"kubernetes.io/projected/ed1198a5-a7fa-4ab4-9656-8e9700deec37-kube-api-access-sqwvz\") pod \"test-operator-controller-manager-69797bbcbd-pkl6g\" (UID: \"ed1198a5-a7fa-4ab4-9656-8e9700deec37\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.274567 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.346537 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqdn8\" (UniqueName: \"kubernetes.io/projected/3c6369d9-2ecf-4187-bb10-76bde13ecd5d-kube-api-access-kqdn8\") pod \"telemetry-operator-controller-manager-85cd9769bb-gwzt2\" (UID: \"3c6369d9-2ecf-4187-bb10-76bde13ecd5d\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.348697 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.349679 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.373576 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hlb79" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.375932 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.376116 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.376325 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.377038 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dbb\" (UniqueName: \"kubernetes.io/projected/31021ae3-dbb7-4ceb-8737-31052d849f0a-kube-api-access-59dbb\") pod \"watcher-operator-controller-manager-5ffb9c6597-b2w8p\" (UID: \"31021ae3-dbb7-4ceb-8737-31052d849f0a\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.377089 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqwvz\" (UniqueName: \"kubernetes.io/projected/ed1198a5-a7fa-4ab4-9656-8e9700deec37-kube-api-access-sqwvz\") pod \"test-operator-controller-manager-69797bbcbd-pkl6g\" (UID: \"ed1198a5-a7fa-4ab4-9656-8e9700deec37\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.387355 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.408726 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqwvz\" (UniqueName: \"kubernetes.io/projected/ed1198a5-a7fa-4ab4-9656-8e9700deec37-kube-api-access-sqwvz\") pod \"test-operator-controller-manager-69797bbcbd-pkl6g\" (UID: \"ed1198a5-a7fa-4ab4-9656-8e9700deec37\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487579 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487685 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dbb\" (UniqueName: \"kubernetes.io/projected/31021ae3-dbb7-4ceb-8737-31052d849f0a-kube-api-access-59dbb\") pod \"watcher-operator-controller-manager-5ffb9c6597-b2w8p\" (UID: \"31021ae3-dbb7-4ceb-8737-31052d849f0a\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487716 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487738 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcxbv\" (UniqueName: \"kubernetes.io/projected/a2bbc43c-9feb-4287-9e35-6f100c6644f6-kube-api-access-dcxbv\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.487764 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.490264 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.490331 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:40.490311029 +0000 UTC m=+899.901420958 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.505119 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.511836 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.512672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.520397 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-lw4v5" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.541352 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.558593 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dbb\" (UniqueName: \"kubernetes.io/projected/31021ae3-dbb7-4ceb-8737-31052d849f0a-kube-api-access-59dbb\") pod \"watcher-operator-controller-manager-5ffb9c6597-b2w8p\" (UID: \"31021ae3-dbb7-4ceb-8737-31052d849f0a\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.599783 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605084 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605038 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.605136 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.605197 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:40.10517629 +0000 UTC m=+899.516286219 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605278 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605306 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcxbv\" (UniqueName: \"kubernetes.io/projected/a2bbc43c-9feb-4287-9e35-6f100c6644f6-kube-api-access-dcxbv\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.605662 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg9m8\" (UniqueName: \"kubernetes.io/projected/14005034-1ce8-4d62-afbc-66cd1d0d9be1-kube-api-access-tg9m8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hv48h\" (UID: \"14005034-1ce8-4d62-afbc-66cd1d0d9be1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.605503 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.605775 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:40.105763166 +0000 UTC m=+899.516873095 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.612873 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.644776 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcxbv\" (UniqueName: \"kubernetes.io/projected/a2bbc43c-9feb-4287-9e35-6f100c6644f6-kube-api-access-dcxbv\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.666972 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.706583 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg9m8\" (UniqueName: \"kubernetes.io/projected/14005034-1ce8-4d62-afbc-66cd1d0d9be1-kube-api-access-tg9m8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hv48h\" (UID: \"14005034-1ce8-4d62-afbc-66cd1d0d9be1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.706632 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.706891 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: E0122 13:58:39.706943 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:58:40.70692882 +0000 UTC m=+900.118038749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.728134 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg9m8\" (UniqueName: \"kubernetes.io/projected/14005034-1ce8-4d62-afbc-66cd1d0d9be1-kube-api-access-tg9m8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-hv48h\" (UID: \"14005034-1ce8-4d62-afbc-66cd1d0d9be1\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.829101 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.959450 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.987720 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq"] Jan 22 13:58:39 crc kubenswrapper[4769]: I0122 13:58:39.994914 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.013527 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.018721 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.120437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.121208 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.121445 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.121519 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:41.121502455 +0000 UTC m=+900.532612384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.121834 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.121868 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:41.121858824 +0000 UTC m=+900.532968753 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.481677 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.481739 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.483962 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg"] Jan 22 13:58:40 crc kubenswrapper[4769]: W0122 13:58:40.486390 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8d08194_af60_4614_b425_1b45340cd73b.slice/crio-8e211b6acad458a8263752ea8cf0d4dda5d997b1f10fcd01c4df1ec4033fb451 WatchSource:0}: Error finding container 8e211b6acad458a8263752ea8cf0d4dda5d997b1f10fcd01c4df1ec4033fb451: Status 404 returned error can't find the container with id 8e211b6acad458a8263752ea8cf0d4dda5d997b1f10fcd01c4df1ec4033fb451 Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.498775 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.506376 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.519964 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" event={"ID":"d8d08194-af60-4614-b425-1b45340cd73b","Type":"ContainerStarted","Data":"8e211b6acad458a8263752ea8cf0d4dda5d997b1f10fcd01c4df1ec4033fb451"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.521447 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.526093 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" event={"ID":"ae11ee9d-5ccf-490d-b457-294820d6a337","Type":"ContainerStarted","Data":"799998ea08e0e9bbfd48036a0c80aa79d93566022d40f3b7b707499213319f26"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.528019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.529686 
4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.529748 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:42.529729776 +0000 UTC m=+901.940839705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.530989 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" event={"ID":"ebd5834b-ef11-40bb-9d15-6878767e7bef","Type":"ContainerStarted","Data":"c349c0257cd7a9326d3d87df3ce033e911cfd3472e4d28d3efc7de87efe40657"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.538167 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" event={"ID":"7d908338-dcdc-4423-b719-02d30f3834ed","Type":"ContainerStarted","Data":"5ef13771deecc8c309d7762f6963cf36a214998b36dc692db2640ecda3261740"} Jan 22 13:58:40 crc kubenswrapper[4769]: W0122 13:58:40.540245 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8217a619_751c_4d07_a96c_ce3208f08e84.slice/crio-1e96bdc7fa11a79bdb532015357913b793cf454020be383e6e08d5c5cf70e34a WatchSource:0}: Error finding container 1e96bdc7fa11a79bdb532015357913b793cf454020be383e6e08d5c5cf70e34a: Status 404 returned error can't find the container with id 1e96bdc7fa11a79bdb532015357913b793cf454020be383e6e08d5c5cf70e34a Jan 22 13:58:40 crc kubenswrapper[4769]: W0122 13:58:40.542560 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c6369d9_2ecf_4187_bb10_76bde13ecd5d.slice/crio-cc2292226f835b99f454e647917e17c67b71020967683c449bde66a2f08937b3 WatchSource:0}: Error finding container cc2292226f835b99f454e647917e17c67b71020967683c449bde66a2f08937b3: Status 404 returned error can't find the container with id cc2292226f835b99f454e647917e17c67b71020967683c449bde66a2f08937b3 Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.545075 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.547585 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" event={"ID":"c367fcfb-38d9-4834-970d-7004d16c8249","Type":"ContainerStarted","Data":"b5785ce3c0ec2d8279f80e9310d8e179645d336badfcdb99c1cda8aa102ff702"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.559546 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" event={"ID":"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049","Type":"ContainerStarted","Data":"6b52b5800978ebeeb1c45b8d6a8cd5f94d3285a287bed1bc73b9e9c33a62ec35"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.561371 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" event={"ID":"141f0476-23eb-4a43-a4ac-4d33c12bfb5b","Type":"ContainerStarted","Data":"b73001b0e9c2fbacf92a624cb9c8f69eae961c7638f8808b7207a3d6134f8f92"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.562197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" event={"ID":"c6b325d8-50c6-411a-bc7f-938b284f0efb","Type":"ContainerStarted","Data":"0ae85f4387bb09d6be1023e705a1beb47cf034173e4e3ef9f8ce2a4b79bd3fb9"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.562955 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" event={"ID":"d40b03ae-0991-4364-85f3-89cf5e8d5686","Type":"ContainerStarted","Data":"b56191106aeb936cd96b008014ab64102c13e10ce2dff5f478db4fec28fa8141"} Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.585008 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.599846 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.605448 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.610747 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv"] Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.615048 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-59dbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-b2w8p_openstack-operators(31021ae3-dbb7-4ceb-8737-31052d849f0a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.616187 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" podUID="31021ae3-dbb7-4ceb-8737-31052d849f0a" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.616599 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z"] Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.616663 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldb9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-jbtsm_openstack-operators(d931ff7f-f554-4249-bc34-2cd09fc97427): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.618287 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" podUID="d931ff7f-f554-4249-bc34-2cd09fc97427" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.619058 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znk26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-mwhh9_openstack-operators(80a16478-da8a-4d2f-89df-163fada49abe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.619446 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tg9m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-hv48h_openstack-operators(14005034-1ce8-4d62-afbc-66cd1d0d9be1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.619576 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r95kw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-prfwv_openstack-operators(11299941-70c0-41a8-ad9c-5c4648c3aa95): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.620187 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" podUID="80a16478-da8a-4d2f-89df-163fada49abe" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.620487 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" podUID="14005034-1ce8-4d62-afbc-66cd1d0d9be1" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.620985 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" podUID="11299941-70c0-41a8-ad9c-5c4648c3aa95" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.621634 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dnphp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-ctf5z_openstack-operators(f13c0d19-4c14-4897-bbc5-5c220d207e41): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.622962 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" podUID="f13c0d19-4c14-4897-bbc5-5c220d207e41" Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.625919 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.631370 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h"] Jan 22 13:58:40 crc kubenswrapper[4769]: I0122 13:58:40.738190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.738380 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:40 crc kubenswrapper[4769]: E0122 13:58:40.738452 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:58:42.73843188 +0000 UTC m=+902.149541809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.146728 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.147111 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.147243 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.147292 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:43.147275236 +0000 UTC m=+902.558385165 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.147640 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.147679 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:43.147659836 +0000 UTC m=+902.558769765 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.573037 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" event={"ID":"ed1198a5-a7fa-4ab4-9656-8e9700deec37","Type":"ContainerStarted","Data":"404cb91568c372461bba865aeb8b5fe1b216c271d1652940359fb48dab557cb3"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.574554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" event={"ID":"3d8a97d6-e3bd-49e0-bc78-024286cce303","Type":"ContainerStarted","Data":"1560051fd9396015c3821b45a37ac2eb5f38df31f66186026e831be5db48b178"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.575945 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" event={"ID":"8217a619-751c-4d07-a96c-ce3208f08e84","Type":"ContainerStarted","Data":"1e96bdc7fa11a79bdb532015357913b793cf454020be383e6e08d5c5cf70e34a"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.577580 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" event={"ID":"11299941-70c0-41a8-ad9c-5c4648c3aa95","Type":"ContainerStarted","Data":"78a36011e50eeea129f34b1d97d83c27efe609521c55b88920169e70d818d533"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.583181 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" podUID="11299941-70c0-41a8-ad9c-5c4648c3aa95" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.583182 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" event={"ID":"80a16478-da8a-4d2f-89df-163fada49abe","Type":"ContainerStarted","Data":"28afbba3d9e8a3dd073b655e22ecfea05e5436d84c43581420e67d363507ba3d"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.586333 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" podUID="80a16478-da8a-4d2f-89df-163fada49abe" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.586562 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" event={"ID":"31021ae3-dbb7-4ceb-8737-31052d849f0a","Type":"ContainerStarted","Data":"b9149c2c462ac76241b7958b988412ef09cf6085d8a01901aff67b47c8d763c0"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.587854 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" podUID="31021ae3-dbb7-4ceb-8737-31052d849f0a" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.587892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" event={"ID":"d931ff7f-f554-4249-bc34-2cd09fc97427","Type":"ContainerStarted","Data":"ddc61e35bd61dede929a152277955adafeb3ff8ce918aec58cc9f7b823b8336a"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.589313 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" podUID="d931ff7f-f554-4249-bc34-2cd09fc97427" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.589835 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" event={"ID":"f13c0d19-4c14-4897-bbc5-5c220d207e41","Type":"ContainerStarted","Data":"148747892a47776f1b0cb5f392e6cacf2f02648d0926bebde9daafc560a42863"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.590859 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" event={"ID":"a32a1e6f-004c-4675-abed-10078b43492a","Type":"ContainerStarted","Data":"c4e99c31781ef758d4fb4f4acc26b08431f5b29c047db8d9d0677ce02a928a4e"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.591013 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" podUID="f13c0d19-4c14-4897-bbc5-5c220d207e41" Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.593427 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" event={"ID":"3c6369d9-2ecf-4187-bb10-76bde13ecd5d","Type":"ContainerStarted","Data":"cc2292226f835b99f454e647917e17c67b71020967683c449bde66a2f08937b3"} Jan 22 13:58:41 crc kubenswrapper[4769]: I0122 13:58:41.594584 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" event={"ID":"14005034-1ce8-4d62-afbc-66cd1d0d9be1","Type":"ContainerStarted","Data":"f27a66b4d9c86597d51f5e04be69641aa97a3f921f3d9981d997cb29bcc706d9"} Jan 22 13:58:41 crc kubenswrapper[4769]: E0122 13:58:41.596231 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" podUID="14005034-1ce8-4d62-afbc-66cd1d0d9be1" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.209836 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.209891 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.267857 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.571352 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.571585 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.571656 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:46.571630286 +0000 UTC m=+905.982740215 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.619854 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" podUID="11299941-70c0-41a8-ad9c-5c4648c3aa95" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.619898 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" podUID="d931ff7f-f554-4249-bc34-2cd09fc97427" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.620294 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" podUID="31021ae3-dbb7-4ceb-8737-31052d849f0a" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.620710 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" 
pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" podUID="80a16478-da8a-4d2f-89df-163fada49abe" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.621854 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" podUID="14005034-1ce8-4d62-afbc-66cd1d0d9be1" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.622947 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" podUID="f13c0d19-4c14-4897-bbc5-5c220d207e41" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.722202 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.775533 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.775734 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:42 crc kubenswrapper[4769]: E0122 13:58:42.775849 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:58:46.775829824 +0000 UTC m=+906.186939743 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:42 crc kubenswrapper[4769]: I0122 13:58:42.781044 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:58:43 crc kubenswrapper[4769]: I0122 13:58:43.183326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:43 crc kubenswrapper[4769]: I0122 13:58:43.183440 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:43 crc kubenswrapper[4769]: E0122 13:58:43.183583 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:43 crc kubenswrapper[4769]: E0122 13:58:43.183603 4769 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 13:58:43 crc kubenswrapper[4769]: E0122 13:58:43.183641 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:47.183620912 +0000 UTC m=+906.594730841 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:43 crc kubenswrapper[4769]: E0122 13:58:43.183692 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:47.183681484 +0000 UTC m=+906.594791423 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "metrics-server-cert" not found Jan 22 13:58:44 crc kubenswrapper[4769]: I0122 13:58:44.628909 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hslhq" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server" containerID="cri-o://cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" gracePeriod=2 Jan 22 13:58:45 crc kubenswrapper[4769]: I0122 13:58:45.647077 4769 generic.go:334] "Generic (PLEG): container finished" podID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" exitCode=0 Jan 22 13:58:45 crc kubenswrapper[4769]: I0122 13:58:45.647109 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerDied","Data":"cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333"} Jan 22 13:58:46 crc kubenswrapper[4769]: I0122 13:58:46.635489 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:46 crc kubenswrapper[4769]: E0122 13:58:46.635683 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:46 crc kubenswrapper[4769]: E0122 13:58:46.635756 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert podName:13c33fdb-b388-4fdf-996c-544286f47a73 nodeName:}" failed. No retries permitted until 2026-01-22 13:58:54.635739146 +0000 UTC m=+914.046849095 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert") pod "infra-operator-controller-manager-54ccf4f85d-zt4sd" (UID: "13c33fdb-b388-4fdf-996c-544286f47a73") : secret "infra-operator-webhook-server-cert" not found Jan 22 13:58:46 crc kubenswrapper[4769]: I0122 13:58:46.838420 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:58:46 crc kubenswrapper[4769]: E0122 13:58:46.838605 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:46 crc kubenswrapper[4769]: E0122 13:58:46.838693 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. 
No retries permitted until 2026-01-22 13:58:54.83867586 +0000 UTC m=+914.249785799 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.906463 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c2drt"
Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.915209 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"
Jan 22 13:58:55 crc kubenswrapper[4769]: I0122 13:58:55.248312 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"
Jan 22 13:58:55 crc kubenswrapper[4769]: I0122 13:58:55.248766 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"
Jan 22 13:58:55 crc kubenswrapper[4769]: E0122 13:58:55.248925 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
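The durationBeforeRetry values for these failed secret mounts double on every attempt: 2s, then 4s, then 8s above, and 16s just below. The volume reconciler retries with exponential backoff, so a still-missing webhook or metrics cert does not turn into a hot retry loop. A minimal sketch of the doubling policy; the initial delay and cap are illustrative constants, not kubelet's actual tuning:

package main

import (
	"fmt"
	"time"
)

// Doubling retry delay as seen in the durationBeforeRetry fields:
// 2s -> 4s -> 8s -> 16s. Constants are assumptions for illustration.
const (
	initialDelay = 2 * time.Second
	maxDelay     = 2 * time.Minute
)

func nextDelay(current time.Duration) time.Duration {
	if current == 0 {
		return initialDelay
	}
	if next := 2 * current; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for attempt := 1; attempt <= 5; attempt++ {
		d = nextDelay(d)
		fmt.Printf("attempt %d: retry in %v\n", attempt, d) // 2s, 4s, 8s, 16s, 32s
	}
}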
Jan 22 13:58:55 crc kubenswrapper[4769]: E0122 13:58:55.249010 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:59:11.248990681 +0000 UTC m=+930.660100610 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found
Jan 22 13:58:52 crc kubenswrapper[4769]: E0122 13:58:52.210564 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333 is running failed: container process not found" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" cmd=["grpc_health_probe","-addr=:50051"]
Jan 22 13:58:52 crc kubenswrapper[4769]: E0122 13:58:52.211530 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333 is running failed: container process not found" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" cmd=["grpc_health_probe","-addr=:50051"]
Jan 22 13:58:52 crc kubenswrapper[4769]: E0122 13:58:52.211932 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333 is running failed: container process not found" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" cmd=["grpc_health_probe","-addr=:50051"]
Jan 22 13:58:52 crc kubenswrapper[4769]: E0122 13:58:52.211961 4769 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hslhq" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server"
Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.640669 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"
Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.651096 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/13c33fdb-b388-4fdf-996c-544286f47a73-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-zt4sd\" (UID: \"13c33fdb-b388-4fdf-996c-544286f47a73\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"
Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.842645 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"
Jan 22 13:58:54 crc kubenswrapper[4769]: E0122 13:58:54.842881 4769 secret.go:188] Couldn't get secret
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:54 crc kubenswrapper[4769]: E0122 13:58:54.842969 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert podName:2b0a07de-4458-4970-a304-a608625bdebf nodeName:}" failed. No retries permitted until 2026-01-22 13:59:10.842945111 +0000 UTC m=+930.254055040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" (UID: "2b0a07de-4458-4970-a304-a608625bdebf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.906463 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c2drt" Jan 22 13:58:54 crc kubenswrapper[4769]: I0122 13:58:54.915209 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:58:55 crc kubenswrapper[4769]: I0122 13:58:55.248312 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:55 crc kubenswrapper[4769]: I0122 13:58:55.248766 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:55 crc kubenswrapper[4769]: E0122 13:58:55.248925 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 13:58:55 crc kubenswrapper[4769]: E0122 13:58:55.249010 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs podName:a2bbc43c-9feb-4287-9e35-6f100c6644f6 nodeName:}" failed. No retries permitted until 2026-01-22 13:59:11.248990681 +0000 UTC m=+930.660100610 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs") pod "openstack-operator-controller-manager-54d678f547-4dd5j" (UID: "a2bbc43c-9feb-4287-9e35-6f100c6644f6") : secret "webhook-server-cert" not found Jan 22 13:58:55 crc kubenswrapper[4769]: I0122 13:58:55.253538 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-metrics-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.285863 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.286166 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r9r67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-fzz6p_openstack-operators(8217a619-751c-4d07-a96c-ce3208f08e84): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.287586 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" podUID="8217a619-751c-4d07-a96c-ce3208f08e84" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.713596 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" podUID="8217a619-751c-4d07-a96c-ce3208f08e84" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.836272 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.836494 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgjzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-rlcb9_openstack-operators(c6b325d8-50c6-411a-bc7f-938b284f0efb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:58:56 crc kubenswrapper[4769]: E0122 13:58:56.837976 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" podUID="c6b325d8-50c6-411a-bc7f-938b284f0efb" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.513860 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.515003 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bt5bv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-ttb7f_openstack-operators(3d8a97d6-e3bd-49e0-bc78-024286cce303): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.516391 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" podUID="3d8a97d6-e3bd-49e0-bc78-024286cce303" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.719238 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" podUID="c6b325d8-50c6-411a-bc7f-938b284f0efb" Jan 22 13:58:57 crc kubenswrapper[4769]: E0122 13:58:57.719424 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" podUID="3d8a97d6-e3bd-49e0-bc78-024286cce303" Jan 22 13:58:58 crc kubenswrapper[4769]: E0122 13:58:58.445876 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 22 13:58:58 crc kubenswrapper[4769]: E0122 13:58:58.446156 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-plxd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-brq9d_openstack-operators(d40b03ae-0991-4364-85f3-89cf5e8d5686): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:58:58 crc kubenswrapper[4769]: E0122 13:58:58.447618 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" podUID="d40b03ae-0991-4364-85f3-89cf5e8d5686" Jan 22 13:58:58 crc kubenswrapper[4769]: E0122 13:58:58.726018 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" podUID="d40b03ae-0991-4364-85f3-89cf5e8d5686" Jan 22 13:59:00 crc kubenswrapper[4769]: E0122 13:59:00.560817 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4" Jan 22 13:59:00 crc kubenswrapper[4769]: E0122 13:59:00.561247 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ttq9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-x8dvt_openstack-operators(ebd5834b-ef11-40bb-9d15-6878767e7bef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:59:00 crc kubenswrapper[4769]: E0122 13:59:00.562361 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" podUID="ebd5834b-ef11-40bb-9d15-6878767e7bef" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.614508 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.635874 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") pod \"8bf4cf7c-e696-4123-af54-e8f96242dea3\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.636011 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") pod \"8bf4cf7c-e696-4123-af54-e8f96242dea3\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.636058 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") pod \"8bf4cf7c-e696-4123-af54-e8f96242dea3\" (UID: \"8bf4cf7c-e696-4123-af54-e8f96242dea3\") " Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.641352 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities" (OuterVolumeSpecName: "utilities") pod "8bf4cf7c-e696-4123-af54-e8f96242dea3" (UID: "8bf4cf7c-e696-4123-af54-e8f96242dea3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.656895 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf" (OuterVolumeSpecName: "kube-api-access-d4nxf") pod "8bf4cf7c-e696-4123-af54-e8f96242dea3" (UID: "8bf4cf7c-e696-4123-af54-e8f96242dea3"). InnerVolumeSpecName "kube-api-access-d4nxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.728194 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bf4cf7c-e696-4123-af54-e8f96242dea3" (UID: "8bf4cf7c-e696-4123-af54-e8f96242dea3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.741553 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4nxf\" (UniqueName: \"kubernetes.io/projected/8bf4cf7c-e696-4123-af54-e8f96242dea3-kube-api-access-d4nxf\") on node \"crc\" DevicePath \"\"" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.741587 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.741597 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4cf7c-e696-4123-af54-e8f96242dea3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.753375 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hslhq" event={"ID":"8bf4cf7c-e696-4123-af54-e8f96242dea3","Type":"ContainerDied","Data":"0c1552ad818b2e1be914c6f1cf75464188673db6cbd965f9b19cec1319993de7"} Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.753434 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hslhq" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.753471 4769 scope.go:117] "RemoveContainer" containerID="cdbf7f7f6a90921d32f3d4d11232230d895172702918c94b29968a993593d333" Jan 22 13:59:00 crc kubenswrapper[4769]: E0122 13:59:00.757016 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" podUID="ebd5834b-ef11-40bb-9d15-6878767e7bef" Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.790642 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.795669 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hslhq"] Jan 22 13:59:00 crc kubenswrapper[4769]: I0122 13:59:00.891832 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" path="/var/lib/kubelet/pods/8bf4cf7c-e696-4123-af54-e8f96242dea3/volumes" Jan 22 13:59:02 crc kubenswrapper[4769]: E0122 13:59:02.331423 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 13:59:02 crc kubenswrapper[4769]: E0122 13:59:02.331961 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dbvd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-f2klg_openstack-operators(d8d08194-af60-4614-b425-1b45340cd73b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 13:59:02 crc kubenswrapper[4769]: E0122 13:59:02.333153 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" podUID="d8d08194-af60-4614-b425-1b45340cd73b" Jan 22 13:59:02 crc kubenswrapper[4769]: E0122 13:59:02.767586 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" podUID="d8d08194-af60-4614-b425-1b45340cd73b" Jan 22 13:59:06 crc kubenswrapper[4769]: I0122 13:59:06.936616 4769 scope.go:117] "RemoveContainer" containerID="ecd6b7d791c1fc22812115bf124726f845b9a1695d08053991cc5bf7429a01b6" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.335719 4769 scope.go:117] "RemoveContainer" containerID="7c1458b4e0b7ea6519275d802b12eea4d4603db4985bd4c7ba57075375cf25a8" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.744042 4769 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd"] Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.799704 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" event={"ID":"ae11ee9d-5ccf-490d-b457-294820d6a337","Type":"ContainerStarted","Data":"ad7ec24d398406d1040ff7f36144f2a8ca799d9beebc3696ccd828dc5260dc4f"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.800626 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.802314 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" event={"ID":"a32a1e6f-004c-4675-abed-10078b43492a","Type":"ContainerStarted","Data":"c8df860d085292707a94865925bc76f74eb2adf5f3b264b32862738bb2757fce"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.802811 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.826134 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" event={"ID":"3c6369d9-2ecf-4187-bb10-76bde13ecd5d","Type":"ContainerStarted","Data":"7a32e1edeefff72ca7ad2bea005d634c3017c761de4476668101d38d375c7823"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.826284 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.834126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" event={"ID":"c367fcfb-38d9-4834-970d-7004d16c8249","Type":"ContainerStarted","Data":"ff8a471d8799793a319e5c9a7f14a0b49fad3533484e2fe58f7f47cbb46aa5b2"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.834771 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.853600 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" podStartSLOduration=6.590314652 podStartE2EDuration="29.853582591s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:39.774627442 +0000 UTC m=+899.185737371" lastFinishedPulling="2026-01-22 13:59:03.037895381 +0000 UTC m=+922.449005310" observedRunningTime="2026-01-22 13:59:07.82800703 +0000 UTC m=+927.239116959" watchObservedRunningTime="2026-01-22 13:59:07.853582591 +0000 UTC m=+927.264692520" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.858155 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" podStartSLOduration=7.346294349 podStartE2EDuration="29.85813363s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.526044519 +0000 UTC m=+899.937154438" lastFinishedPulling="2026-01-22 13:59:03.03788379 +0000 UTC m=+922.448993719" observedRunningTime="2026-01-22 13:59:07.853470718 +0000 UTC 
m=+927.264580657" watchObservedRunningTime="2026-01-22 13:59:07.85813363 +0000 UTC m=+927.269243559" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.862536 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" event={"ID":"ed1198a5-a7fa-4ab4-9656-8e9700deec37","Type":"ContainerStarted","Data":"621d9d45842fa5ef8fa011440ec24b62fbd43b5ab35143315d77bcf3d9cfeaea"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.863369 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.864710 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" event={"ID":"13c33fdb-b388-4fdf-996c-544286f47a73","Type":"ContainerStarted","Data":"3c9258ff3e30066454f1e0fe0b06fcab9da82c786502c650c4f2b7365b9e3fb2"} Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.879609 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" podStartSLOduration=7.387949936 podStartE2EDuration="29.879591032s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.546381518 +0000 UTC m=+899.957491447" lastFinishedPulling="2026-01-22 13:59:03.038022614 +0000 UTC m=+922.449132543" observedRunningTime="2026-01-22 13:59:07.875869355 +0000 UTC m=+927.286979284" watchObservedRunningTime="2026-01-22 13:59:07.879591032 +0000 UTC m=+927.290700961" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.965650 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" podStartSLOduration=7.533872041 podStartE2EDuration="29.965631767s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.606098844 +0000 UTC m=+900.017208773" lastFinishedPulling="2026-01-22 13:59:03.03785857 +0000 UTC m=+922.448968499" observedRunningTime="2026-01-22 13:59:07.964872357 +0000 UTC m=+927.375982296" watchObservedRunningTime="2026-01-22 13:59:07.965631767 +0000 UTC m=+927.376741696" Jan 22 13:59:07 crc kubenswrapper[4769]: I0122 13:59:07.967187 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" podStartSLOduration=7.090781224 podStartE2EDuration="29.967178948s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.161459416 +0000 UTC m=+899.572569345" lastFinishedPulling="2026-01-22 13:59:03.03785714 +0000 UTC m=+922.448967069" observedRunningTime="2026-01-22 13:59:07.931378019 +0000 UTC m=+927.342487978" watchObservedRunningTime="2026-01-22 13:59:07.967178948 +0000 UTC m=+927.378288897" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.875945 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" event={"ID":"bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049","Type":"ContainerStarted","Data":"29cb0068743d3e2ec1ba622ac6694b5c995ea608c7b9a9bc35fa9f03a07b266d"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.876012 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:59:08 crc 
kubenswrapper[4769]: I0122 13:59:08.878870 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" event={"ID":"f13c0d19-4c14-4897-bbc5-5c220d207e41","Type":"ContainerStarted","Data":"71ad5f08943929d364c3557c81b7f32f75166746528ec9d87f97c8d6e587c9d9"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.879056 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.881054 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" event={"ID":"14005034-1ce8-4d62-afbc-66cd1d0d9be1","Type":"ContainerStarted","Data":"eda1a43523bb7d2a34ca9fd4426880d617840cc51357f657f90c8add1f4fb7b2"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.892959 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" event={"ID":"11299941-70c0-41a8-ad9c-5c4648c3aa95","Type":"ContainerStarted","Data":"ad2f145ab6dc28c07b31645d823a995628fed4f7b6114497dcd9ca97ae3728bc"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.893148 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.896175 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" event={"ID":"7d908338-dcdc-4423-b719-02d30f3834ed","Type":"ContainerStarted","Data":"bca7f6294445bc9a0d140e2f39f10fb05c60d067a781dd29b6e4a4c1638298ae"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.896256 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.897932 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" podStartSLOduration=7.570401095 podStartE2EDuration="30.897920247s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:39.71041375 +0000 UTC m=+899.121523679" lastFinishedPulling="2026-01-22 13:59:03.037932902 +0000 UTC m=+922.449042831" observedRunningTime="2026-01-22 13:59:08.896366776 +0000 UTC m=+928.307476715" watchObservedRunningTime="2026-01-22 13:59:08.897920247 +0000 UTC m=+928.309030176" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.898770 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" event={"ID":"8217a619-751c-4d07-a96c-ce3208f08e84","Type":"ContainerStarted","Data":"25be5054df9f1b99c2fb0aef13520fcde4eabe101c359d90267fdf8a547f1cfd"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.899492 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.900890 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" 
event={"ID":"141f0476-23eb-4a43-a4ac-4d33c12bfb5b","Type":"ContainerStarted","Data":"5918743ed5b448c2a8f37e9bc67f1fded7d5f4c1000b1596a0f23dea4d83035b"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.901279 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.902905 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" event={"ID":"80a16478-da8a-4d2f-89df-163fada49abe","Type":"ContainerStarted","Data":"2de4c10f55c8e21ae16eae53c51b1df9c1e5401445367aa40dd68be1ad708e72"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.903237 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.905016 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" event={"ID":"31021ae3-dbb7-4ceb-8737-31052d849f0a","Type":"ContainerStarted","Data":"d20d82b0dc1aec4cf3c84014da525ae4fb07ab88e03bd7cebbeb7b830cdfa553"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.905308 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.906967 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" event={"ID":"d931ff7f-f554-4249-bc34-2cd09fc97427","Type":"ContainerStarted","Data":"e4b9e080024c42102937a028460c06374487901e7f2a970d08b8687992c15919"} Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.907394 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.924466 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hv48h" podStartSLOduration=3.03295855 podStartE2EDuration="29.924443701s" podCreationTimestamp="2026-01-22 13:58:39 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.61938328 +0000 UTC m=+900.030493209" lastFinishedPulling="2026-01-22 13:59:07.510868431 +0000 UTC m=+926.921978360" observedRunningTime="2026-01-22 13:59:08.920897259 +0000 UTC m=+928.332007198" watchObservedRunningTime="2026-01-22 13:59:08.924443701 +0000 UTC m=+928.335553630" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.947871 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" podStartSLOduration=4.106021145 podStartE2EDuration="30.947842965s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.621505085 +0000 UTC m=+900.032615014" lastFinishedPulling="2026-01-22 13:59:07.463326915 +0000 UTC m=+926.874436834" observedRunningTime="2026-01-22 13:59:08.942411572 +0000 UTC m=+928.353521511" watchObservedRunningTime="2026-01-22 13:59:08.947842965 +0000 UTC m=+928.358952894" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.969100 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" 
podStartSLOduration=7.956433789 podStartE2EDuration="30.969080681s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.025348531 +0000 UTC m=+899.436458460" lastFinishedPulling="2026-01-22 13:59:03.037995423 +0000 UTC m=+922.449105352" observedRunningTime="2026-01-22 13:59:08.967618463 +0000 UTC m=+928.378728402" watchObservedRunningTime="2026-01-22 13:59:08.969080681 +0000 UTC m=+928.380190620" Jan 22 13:59:08 crc kubenswrapper[4769]: I0122 13:59:08.984456 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" podStartSLOduration=4.268121433 podStartE2EDuration="30.984438863s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.619518194 +0000 UTC m=+900.030628123" lastFinishedPulling="2026-01-22 13:59:07.335835624 +0000 UTC m=+926.746945553" observedRunningTime="2026-01-22 13:59:08.983276123 +0000 UTC m=+928.394386062" watchObservedRunningTime="2026-01-22 13:59:08.984438863 +0000 UTC m=+928.395548792" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.003844 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" podStartSLOduration=3.306402186 podStartE2EDuration="30.003821881s" podCreationTimestamp="2026-01-22 13:58:39 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.614928973 +0000 UTC m=+900.026038912" lastFinishedPulling="2026-01-22 13:59:07.312348678 +0000 UTC m=+926.723458607" observedRunningTime="2026-01-22 13:59:08.997225798 +0000 UTC m=+928.408335727" watchObservedRunningTime="2026-01-22 13:59:09.003821881 +0000 UTC m=+928.414931810" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.021199 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" podStartSLOduration=4.174284416 podStartE2EDuration="31.021178227s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.616503595 +0000 UTC m=+900.027613524" lastFinishedPulling="2026-01-22 13:59:07.463397406 +0000 UTC m=+926.874507335" observedRunningTime="2026-01-22 13:59:09.018108366 +0000 UTC m=+928.429218285" watchObservedRunningTime="2026-01-22 13:59:09.021178227 +0000 UTC m=+928.432288156" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.058651 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" podStartSLOduration=4.21345964 podStartE2EDuration="31.058634218s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.618980759 +0000 UTC m=+900.030090688" lastFinishedPulling="2026-01-22 13:59:07.464155337 +0000 UTC m=+926.875265266" observedRunningTime="2026-01-22 13:59:09.055890706 +0000 UTC m=+928.467000635" watchObservedRunningTime="2026-01-22 13:59:09.058634218 +0000 UTC m=+928.469744147" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.093502 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" podStartSLOduration=7.776490474 podStartE2EDuration="31.093486731s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:39.720915554 +0000 UTC m=+899.132025473" lastFinishedPulling="2026-01-22 13:59:03.037911801 +0000 UTC 
m=+922.449021730" observedRunningTime="2026-01-22 13:59:09.08925758 +0000 UTC m=+928.500367509" watchObservedRunningTime="2026-01-22 13:59:09.093486731 +0000 UTC m=+928.504596660" Jan 22 13:59:09 crc kubenswrapper[4769]: I0122 13:59:09.112061 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" podStartSLOduration=3.9988791089999998 podStartE2EDuration="31.112043977s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.551958754 +0000 UTC m=+899.963068683" lastFinishedPulling="2026-01-22 13:59:07.665123622 +0000 UTC m=+927.076233551" observedRunningTime="2026-01-22 13:59:09.104344486 +0000 UTC m=+928.515454405" watchObservedRunningTime="2026-01-22 13:59:09.112043977 +0000 UTC m=+928.523153906" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.482172 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.482513 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.903857 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.920523 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2b0a07de-4458-4970-a304-a608625bdebf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8542tcht\" (UID: \"2b0a07de-4458-4970-a304-a608625bdebf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.924230 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" event={"ID":"13c33fdb-b388-4fdf-996c-544286f47a73","Type":"ContainerStarted","Data":"7bc9efabe45c34437909b125f12d6fc6ec395ccc5f1264594b0ca1c7198350b2"} Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.924387 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.926662 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" event={"ID":"3d8a97d6-e3bd-49e0-bc78-024286cce303","Type":"ContainerStarted","Data":"681d24f063b3e61adc895b535f0dcc78df7f1de119487182b35fd46bb0132143"} Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.927091 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:59:10 crc kubenswrapper[4769]: I0122 13:59:10.951574 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" podStartSLOduration=30.164625941 podStartE2EDuration="32.95155052s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:59:07.766056637 +0000 UTC m=+927.177166566" lastFinishedPulling="2026-01-22 13:59:10.552981216 +0000 UTC m=+929.964091145" observedRunningTime="2026-01-22 13:59:10.945649195 +0000 UTC m=+930.356759134" watchObservedRunningTime="2026-01-22 13:59:10.95155052 +0000 UTC m=+930.362660449" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.021750 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-sn876" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.030923 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.323169 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.331293 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a2bbc43c-9feb-4287-9e35-6f100c6644f6-webhook-certs\") pod \"openstack-operator-controller-manager-54d678f547-4dd5j\" (UID: \"a2bbc43c-9feb-4287-9e35-6f100c6644f6\") " pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.440684 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" podStartSLOduration=3.410656971 podStartE2EDuration="33.440666377s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.520044863 +0000 UTC m=+899.931154782" lastFinishedPulling="2026-01-22 13:59:10.550054259 +0000 UTC m=+929.961164188" observedRunningTime="2026-01-22 13:59:10.967505578 +0000 UTC m=+930.378615517" watchObservedRunningTime="2026-01-22 13:59:11.440666377 +0000 UTC m=+930.851776296" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.444390 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht"] Jan 22 13:59:11 crc kubenswrapper[4769]: W0122 13:59:11.449388 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b0a07de_4458_4970_a304_a608625bdebf.slice/crio-6c279ea742bba02b302611eb33d71746ea1fafa31ba3735980cc3f1d33f87ad6 WatchSource:0}: Error finding container 6c279ea742bba02b302611eb33d71746ea1fafa31ba3735980cc3f1d33f87ad6: Status 404 returned error can't find the container with id 6c279ea742bba02b302611eb33d71746ea1fafa31ba3735980cc3f1d33f87ad6 Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.563004 4769 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hlb79" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.571868 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.934680 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" event={"ID":"d40b03ae-0991-4364-85f3-89cf5e8d5686","Type":"ContainerStarted","Data":"5c7e365f66b93d50321f79dcfec06dc0b8ff2c5b45694d6f9f9d52cbb2246ead"} Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.935330 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.937179 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" event={"ID":"2b0a07de-4458-4970-a304-a608625bdebf","Type":"ContainerStarted","Data":"6c279ea742bba02b302611eb33d71746ea1fafa31ba3735980cc3f1d33f87ad6"} Jan 22 13:59:11 crc kubenswrapper[4769]: I0122 13:59:11.953508 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" podStartSLOduration=2.789367998 podStartE2EDuration="33.953490235s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.10708059 +0000 UTC m=+899.518190519" lastFinishedPulling="2026-01-22 13:59:11.271202827 +0000 UTC m=+930.682312756" observedRunningTime="2026-01-22 13:59:11.951931114 +0000 UTC m=+931.363041043" watchObservedRunningTime="2026-01-22 13:59:11.953490235 +0000 UTC m=+931.364600154" Jan 22 13:59:12 crc kubenswrapper[4769]: I0122 13:59:12.010896 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j"] Jan 22 13:59:12 crc kubenswrapper[4769]: I0122 13:59:12.943821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" event={"ID":"a2bbc43c-9feb-4287-9e35-6f100c6644f6","Type":"ContainerStarted","Data":"f4e37806e6527062db89529eef98d005defcffc5552dda969c9d0b0ed2d49f3d"} Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.952174 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" event={"ID":"a2bbc43c-9feb-4287-9e35-6f100c6644f6","Type":"ContainerStarted","Data":"66b32dd0d9268ff4a1b61e4321a3d9e00c1ab00f45e00aad22cb81d48102627b"} Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.953545 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.959395 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" event={"ID":"ebd5834b-ef11-40bb-9d15-6878767e7bef","Type":"ContainerStarted","Data":"490eeea26278e03b32ca9f561648ce2054d428fd80235000f234383ad8c07695"} Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.959640 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.960982 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" event={"ID":"c6b325d8-50c6-411a-bc7f-938b284f0efb","Type":"ContainerStarted","Data":"42c9aff5afd5ce55f8aec69b06fac67459da53bfa3c6146529cc21fbf0d8bc1d"} Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.961176 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.979186 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" podStartSLOduration=34.979169416 podStartE2EDuration="34.979169416s" podCreationTimestamp="2026-01-22 13:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 13:59:13.973769084 +0000 UTC m=+933.384879043" watchObservedRunningTime="2026-01-22 13:59:13.979169416 +0000 UTC m=+933.390279345" Jan 22 13:59:13 crc kubenswrapper[4769]: I0122 13:59:13.993129 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" podStartSLOduration=2.629379915 podStartE2EDuration="35.993112571s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.107420148 +0000 UTC m=+899.518530077" lastFinishedPulling="2026-01-22 13:59:13.471152804 +0000 UTC m=+932.882262733" observedRunningTime="2026-01-22 13:59:13.992187307 +0000 UTC m=+933.403297256" watchObservedRunningTime="2026-01-22 13:59:13.993112571 +0000 UTC m=+933.404222500" Jan 22 13:59:14 crc kubenswrapper[4769]: I0122 13:59:14.007241 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" podStartSLOduration=2.34845952 podStartE2EDuration="36.007226401s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:39.779464269 +0000 UTC m=+899.190574198" lastFinishedPulling="2026-01-22 13:59:13.43823115 +0000 UTC m=+932.849341079" observedRunningTime="2026-01-22 13:59:14.005730062 +0000 UTC m=+933.416839991" watchObservedRunningTime="2026-01-22 13:59:14.007226401 +0000 UTC m=+933.418336330" Jan 22 13:59:14 crc kubenswrapper[4769]: I0122 13:59:14.968229 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" event={"ID":"2b0a07de-4458-4970-a304-a608625bdebf","Type":"ContainerStarted","Data":"c66f2eec601af87c23748c91b258843ae01fb9d65a536001625263bef5a7a158"} Jan 22 13:59:14 crc kubenswrapper[4769]: I0122 13:59:14.998289 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" podStartSLOduration=33.862026637 podStartE2EDuration="36.998263239s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:59:11.451448659 +0000 UTC m=+930.862558588" lastFinishedPulling="2026-01-22 13:59:14.587685261 +0000 UTC m=+933.998795190" observedRunningTime="2026-01-22 13:59:14.990329352 +0000 UTC m=+934.401439321" watchObservedRunningTime="2026-01-22 
13:59:14.998263239 +0000 UTC m=+934.409373208" Jan 22 13:59:15 crc kubenswrapper[4769]: I0122 13:59:15.975225 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:59:16 crc kubenswrapper[4769]: I0122 13:59:16.981589 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" event={"ID":"d8d08194-af60-4614-b425-1b45340cd73b","Type":"ContainerStarted","Data":"ef70237bd566ba26725c3391c44cdb17bffd3c1620a42bb5531d8b8c7f1b88af"} Jan 22 13:59:16 crc kubenswrapper[4769]: I0122 13:59:16.982620 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.857552 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-54q5q" Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.877104 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-2q2v2" Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.880676 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" podStartSLOduration=5.076775172 podStartE2EDuration="40.880658803s" podCreationTimestamp="2026-01-22 13:58:38 +0000 UTC" firstStartedPulling="2026-01-22 13:58:40.49690269 +0000 UTC m=+899.908012619" lastFinishedPulling="2026-01-22 13:59:16.300786321 +0000 UTC m=+935.711896250" observedRunningTime="2026-01-22 13:59:16.997439396 +0000 UTC m=+936.408549335" watchObservedRunningTime="2026-01-22 13:59:18.880658803 +0000 UTC m=+938.291768732" Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.894114 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-rlcb9" Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.909408 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-wvxp8" Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.928310 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-brq9d" Jan 22 13:59:18 crc kubenswrapper[4769]: I0122 13:59:18.950478 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-8rxgq" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.065377 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-5njtw" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.158959 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-w77v6" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.159414 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-ttb7f" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.177976 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-x8dvt" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.193522 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-mwhh9" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.213020 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-fzz6p" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.241285 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-ctf5z" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.255902 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-prfwv" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.280709 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-jbtsm" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.391434 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-gwzt2" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.507609 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" Jan 22 13:59:19 crc kubenswrapper[4769]: I0122 13:59:19.832997 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-b2w8p" Jan 22 13:59:21 crc kubenswrapper[4769]: I0122 13:59:21.038320 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8542tcht" Jan 22 13:59:21 crc kubenswrapper[4769]: I0122 13:59:21.580857 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-54d678f547-4dd5j" Jan 22 13:59:24 crc kubenswrapper[4769]: I0122 13:59:24.922756 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-zt4sd" Jan 22 13:59:29 crc kubenswrapper[4769]: I0122 13:59:29.140234 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-f2klg" Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.482646 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.483248 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.483300 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.484001 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 13:59:40 crc kubenswrapper[4769]: I0122 13:59:40.484067 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e" gracePeriod=600 Jan 22 13:59:45 crc kubenswrapper[4769]: I0122 13:59:45.181021 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e" exitCode=0 Jan 22 13:59:45 crc kubenswrapper[4769]: I0122 13:59:45.181078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e"} Jan 22 13:59:45 crc kubenswrapper[4769]: I0122 13:59:45.181487 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa"} Jan 22 13:59:45 crc kubenswrapper[4769]: I0122 13:59:45.181507 4769 scope.go:117] "RemoveContainer" containerID="3179ab0de90548977badcb720a49e9de55c423265ce63debd6542edff4ab9f17" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.196700 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 13:59:47 crc kubenswrapper[4769]: E0122 13:59:47.202869 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.202919 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server" Jan 22 13:59:47 crc kubenswrapper[4769]: E0122 13:59:47.202931 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="extract-utilities" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.202940 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="extract-utilities" Jan 22 13:59:47 crc kubenswrapper[4769]: E0122 13:59:47.202962 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="extract-content" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.202970 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="extract-content" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.203118 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8bf4cf7c-e696-4123-af54-e8f96242dea3" containerName="registry-server" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.204022 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.206872 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.207101 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.207116 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.207201 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qpvwm" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.207101 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.223635 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.223724 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.261930 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.263151 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.265177 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.276364 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324476 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324521 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324566 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324590 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.324613 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.325429 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.341341 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") pod \"dnsmasq-dns-675f4bcbfc-hwccv\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.425872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc 
kubenswrapper[4769]: I0122 13:59:47.425937 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.425990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.426724 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.426878 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.446708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") pod \"dnsmasq-dns-78dd6ddcc-8mfxs\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.527800 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 13:59:47 crc kubenswrapper[4769]: I0122 13:59:47.581839 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.024686 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 13:59:48 crc kubenswrapper[4769]: W0122 13:59:48.032097 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31fc43cb_0b18_49b4_a19b_6047e962f742.slice/crio-4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9 WatchSource:0}: Error finding container 4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9: Status 404 returned error can't find the container with id 4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9 Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.032345 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.035011 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 13:59:48 crc kubenswrapper[4769]: W0122 13:59:48.040520 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ba28aa8_af6e_4b05_b308_1a5d989da923.slice/crio-03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269 WatchSource:0}: Error finding container 03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269: Status 404 returned error can't find the container with id 03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269 Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.210492 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" event={"ID":"8ba28aa8-af6e-4b05-b308-1a5d989da923","Type":"ContainerStarted","Data":"03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269"} Jan 22 13:59:48 crc kubenswrapper[4769]: I0122 13:59:48.213287 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" event={"ID":"31fc43cb-0b18-49b4-a19b-6047e962f742","Type":"ContainerStarted","Data":"4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9"} Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.160944 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.181876 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.183108 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.194876 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.384989 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.385054 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.385114 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.473120 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.486564 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.486618 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.486655 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.487460 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.487461 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.501139 
4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.502459 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.517612 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.520886 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") pod \"dnsmasq-dns-666b6646f7-4c5lx\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.587472 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.587867 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.587892 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.688541 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.688621 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.688656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.689769 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.689849 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.708307 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") pod \"dnsmasq-dns-57d769cc4f-qvqgs\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.808077 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 13:59:50 crc kubenswrapper[4769]: I0122 13:59:50.850469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.298299 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 13:59:51 crc kubenswrapper[4769]: W0122 13:59:51.308773 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb51a7d68_4414_4157_ab31_b5ee67a26b87.slice/crio-cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f WatchSource:0}: Error finding container cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f: Status 404 returned error can't find the container with id cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.319976 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.321119 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.323560 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.323948 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324044 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324168 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324259 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324369 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.324544 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zm2vm" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.342353 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 13:59:51 crc kubenswrapper[4769]: W0122 13:59:51.367876 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e6c47fe_34e3_498e_a488_96efc7e689b0.slice/crio-8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb WatchSource:0}: Error finding container 8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb: Status 404 returned error can't find the container with id 8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.373619 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501845 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501900 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgrc\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501931 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501955 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.501981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502027 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502065 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502086 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502115 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502138 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.502185 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604052 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604136 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc 
kubenswrapper[4769]: I0122 13:59:51.604159 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604185 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604211 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604289 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604309 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csgrc\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604335 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604359 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604383 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.604703 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.605108 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.605863 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.605910 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.608581 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.608812 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.611141 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.611531 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.612895 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.614891 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.628776 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgrc\" (UniqueName: 
\"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.638297 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.639907 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.643940 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5c97b" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.644519 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.645070 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.645937 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.649878 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.649950 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.650025 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.657570 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.658467 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.680750 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.806744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.806838 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807057 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807096 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807123 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807153 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807176 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807205 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807236 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807295 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.807317 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.908929 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909308 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909343 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909373 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909427 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909446 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.909556 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") device mount path \"/mnt/openstack/pv05\"" 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910010 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910303 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910390 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910045 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910806 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910853 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.910882 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.911290 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.912746 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.917849 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.918158 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.918215 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.923384 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.924479 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.927895 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:51 crc kubenswrapper[4769]: I0122 13:59:51.945519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.064342 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.209584 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.247889 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerStarted","Data":"8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb"} Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.249294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerStarted","Data":"cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f"} Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.805104 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.807656 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.810590 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-txspp" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.810841 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.811006 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.813055 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.816997 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.823122 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935161 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935207 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtxbg\" (UniqueName: \"kubernetes.io/projected/d5478968-e798-44de-b3ed-632864fc0607-kube-api-access-dtxbg\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935276 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935342 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935362 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935388 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5478968-e798-44de-b3ed-632864fc0607-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:52 crc kubenswrapper[4769]: I0122 13:59:52.935428 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037282 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037343 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037377 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtxbg\" (UniqueName: \"kubernetes.io/projected/d5478968-e798-44de-b3ed-632864fc0607-kube-api-access-dtxbg\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037436 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037502 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037525 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037551 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5478968-e798-44de-b3ed-632864fc0607-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.037972 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.041220 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.042806 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.043234 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5478968-e798-44de-b3ed-632864fc0607-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.043616 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5478968-e798-44de-b3ed-632864fc0607-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.050025 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.051560 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/d5478968-e798-44de-b3ed-632864fc0607-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.060111 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtxbg\" (UniqueName: \"kubernetes.io/projected/d5478968-e798-44de-b3ed-632864fc0607-kube-api-access-dtxbg\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.062054 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"d5478968-e798-44de-b3ed-632864fc0607\") " pod="openstack/openstack-galera-0" Jan 22 13:59:53 crc kubenswrapper[4769]: I0122 13:59:53.169854 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.196374 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.198282 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.202285 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.203446 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-nztd5" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.203629 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.203806 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.209737 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354425 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354478 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354527 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc 
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354652 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354683 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354721 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354828 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25rxt\" (UniqueName: \"kubernetes.io/projected/048fbe43-0fef-46e8-bc9d-038c96a4696c-kube-api-access-25rxt\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.354942 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456780 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25rxt\" (UniqueName: \"kubernetes.io/projected/048fbe43-0fef-46e8-bc9d-038c96a4696c-kube-api-access-25rxt\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456884 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456915 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456935 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.456971 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457040 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457070 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457126 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457259 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.457516 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.458052 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.458631 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.459134 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/048fbe43-0fef-46e8-bc9d-038c96a4696c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.462929 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.481781 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.482764 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.486369 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/048fbe43-0fef-46e8-bc9d-038c96a4696c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.491757 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.492021 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-hjfvp"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.492154 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.492809 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25rxt\" (UniqueName: \"kubernetes.io/projected/048fbe43-0fef-46e8-bc9d-038c96a4696c-kube-api-access-25rxt\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.509266 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"048fbe43-0fef-46e8-bc9d-038c96a4696c\") " pod="openstack/openstack-cell1-galera-0"
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.510501 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.529901 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.557862 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kolla-config\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.557905 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwfp\" (UniqueName: \"kubernetes.io/projected/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kube-api-access-hzwfp\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.557956 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.557993 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.558024 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-config-data\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659203 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kolla-config\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659258 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwfp\" (UniqueName: \"kubernetes.io/projected/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kube-api-access-hzwfp\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659311 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659345 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.659374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-config-data\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.660244 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-config-data\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.660273 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kolla-config\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.668559 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.673414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.691207 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwfp\" (UniqueName: \"kubernetes.io/projected/3aa5525a-0eb2-487f-8721-3ef58f5df4aa-kube-api-access-hzwfp\") pod \"memcached-0\" (UID: \"3aa5525a-0eb2-487f-8721-3ef58f5df4aa\") " pod="openstack/memcached-0" Jan 22 13:59:54 crc kubenswrapper[4769]: I0122 13:59:54.861968 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.345521 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerStarted","Data":"6d72a769611a46bdb1768f4e9380f28bb2a07dc2061ec5bd95716855943febe1"} Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.735271 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.736167 4769 util.go:30] "No sandbox for pod can be found. 
Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.736167 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.743194 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-x6wmz" Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.749454 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.896855 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") pod \"kube-state-metrics-0\" (UID: \"6e7522e6-de75-492d-b445-a463f875e393\") " pod="openstack/kube-state-metrics-0" Jan 22 13:59:56 crc kubenswrapper[4769]: I0122 13:59:56.997918 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") pod \"kube-state-metrics-0\" (UID: \"6e7522e6-de75-492d-b445-a463f875e393\") " pod="openstack/kube-state-metrics-0" Jan 22 13:59:57 crc kubenswrapper[4769]: I0122 13:59:57.017132 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") pod \"kube-state-metrics-0\" (UID: \"6e7522e6-de75-492d-b445-a463f875e393\") " pod="openstack/kube-state-metrics-0" Jan 22 13:59:57 crc kubenswrapper[4769]: I0122 13:59:57.062307 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.867178 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-57w6l"] Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.869380 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.873386 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9hrbg" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.873467 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.875527 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ljbrk"]
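
kube-state-metrics-0 needs just one volume, the auto-injected kube-api-access-9fdpt projected service-account bundle, so its mount sequence completes in a single VerifyControllerAttachedVolume/MountVolume/SetUp pass. A sketch of the shape such a projected volume conventionally takes; the volume name matches the log, while the three sources and the 3607-second token lifetime are the usual kubelet defaults, assumed here rather than read from this cluster:

```go
// Sketch of the kube-api-access-9fdpt projected volume from the entries
// above: a bound service-account token, the cluster CA, and the namespace,
// projected into one directory. Defaults are assumptions, not cluster state.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // typical kubelet default, assumed
	vol := corev1.Volume{
		Name: "kube-api-access-9fdpt",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```
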
Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.876465 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.879292 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.892050 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-57w6l"] Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.900920 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ljbrk"] Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944613 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnf2x\" (UniqueName: \"kubernetes.io/projected/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-kube-api-access-xnf2x\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944672 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-run\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944705 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-lib\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944738 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-log-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944860 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.944890 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8spp\" (UniqueName: \"kubernetes.io/projected/2f6b8be2-7370-47ca-843b-1dea67d837c3-kube-api-access-q8spp\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945283 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-combined-ca-bundle\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945393 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName:
\"kubernetes.io/configmap/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-scripts\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945434 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945475 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-etc-ovs\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945496 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-ovn-controller-tls-certs\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945513 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f6b8be2-7370-47ca-843b-1dea67d837c3-scripts\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 13:59:59 crc kubenswrapper[4769]: I0122 13:59:59.945557 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-log\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-etc-ovs\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046507 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-ovn-controller-tls-certs\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046533 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f6b8be2-7370-47ca-843b-1dea67d837c3-scripts\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046568 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-log\") pod \"ovn-controller-ovs-57w6l\" (UID: 
\"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046595 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnf2x\" (UniqueName: \"kubernetes.io/projected/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-kube-api-access-xnf2x\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046614 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-run\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046629 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-lib\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046647 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-log-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046704 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046722 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8spp\" (UniqueName: \"kubernetes.io/projected/2f6b8be2-7370-47ca-843b-1dea67d837c3-kube-api-access-q8spp\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046742 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-combined-ca-bundle\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046836 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-scripts\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046874 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.046982 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-etc-ovs\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047221 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047246 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-log-ovn\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047342 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-run\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047344 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-var-run\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047444 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-lib\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.047518 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2f6b8be2-7370-47ca-843b-1dea67d837c3-var-log\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.049010 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f6b8be2-7370-47ca-843b-1dea67d837c3-scripts\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.051783 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-scripts\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.054195 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-ovn-controller-tls-certs\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.058336 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-combined-ca-bundle\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.072519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8spp\" (UniqueName: \"kubernetes.io/projected/2f6b8be2-7370-47ca-843b-1dea67d837c3-kube-api-access-q8spp\") pod \"ovn-controller-ovs-57w6l\" (UID: \"2f6b8be2-7370-47ca-843b-1dea67d837c3\") " pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.078685 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnf2x\" (UniqueName: \"kubernetes.io/projected/db7ce269-d7ec-4db1-aab3-b22da5d56c6e-kube-api-access-xnf2x\") pod \"ovn-controller-ljbrk\" (UID: \"db7ce269-d7ec-4db1-aab3-b22da5d56c6e\") " pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.156450 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64"] Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.157601 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.160373 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.161258 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.165513 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64"] Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.206306 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-57w6l"
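
collect-profiles-29484840-2ln64 appearing at exactly 14:00:00 is OLM's collect-profiles CronJob firing on schedule: the Kubernetes CronJob controller names each Job after its scheduled time expressed in minutes since the Unix epoch, and -2ln64 is the pod's random suffix. A quick sanity check that the numeric suffix decodes to this very timestamp, assuming the journal timestamps are UTC:

```go
// Sketch: decode the scheduled-minutes suffix of the collect-profiles pod
// name seen in the log above back into wall-clock time.
package main

import (
	"fmt"
	"time"
)

func main() {
	const scheduledMinutes = 29484840 // from collect-profiles-29484840-2ln64
	t := time.Unix(int64(scheduledMinutes)*60, 0).UTC()
	fmt.Println(t) // 2026-01-22 14:00:00 +0000 UTC
}
```

This is a handy trick when correlating a CronJob's pods with the schedule slot they were created for.
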
Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.216501 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.252359 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.252479 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.252520 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.352319 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.353739 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.353766 4769 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.353807 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.353850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.354839 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.357554 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.357819 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.358119 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-66v6p" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.359405 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.359459 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.360109 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.370131 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.371635 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") pod \"collect-profiles-29484840-2ln64\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455195 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 
14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455289 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455347 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455371 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455397 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsdl5\" (UniqueName: \"kubernetes.io/projected/760402cd-68ff-4d2e-a1ba-c54132e75c13-kube-api-access-zsdl5\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455524 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455660 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-config\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.455745 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.489718 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557087 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557471 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557609 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557633 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558263 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsdl5\" (UniqueName: \"kubernetes.io/projected/760402cd-68ff-4d2e-a1ba-c54132e75c13-kube-api-access-zsdl5\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558321 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558392 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-config\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558754 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.558759 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.557833 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.559250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/760402cd-68ff-4d2e-a1ba-c54132e75c13-config\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.562371 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.564060 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.579991 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/760402cd-68ff-4d2e-a1ba-c54132e75c13-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.580056 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsdl5\" (UniqueName: \"kubernetes.io/projected/760402cd-68ff-4d2e-a1ba-c54132e75c13-kube-api-access-zsdl5\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.582459 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"760402cd-68ff-4d2e-a1ba-c54132e75c13\") " pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:00 crc kubenswrapper[4769]: I0122 14:00:00.714582 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.963615 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.967242 4769 util.go:30] "No sandbox for pod can be found. 
Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.967242 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.971118 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.971144 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.971303 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tgkpr" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.976591 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 22 14:00:03 crc kubenswrapper[4769]: I0122 14:00:03.978044 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109509 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109569 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109608 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkl7c\" (UniqueName: \"kubernetes.io/projected/1a4e51d1-8dea-4f12-b7e9-7888f5672711-kube-api-access-kkl7c\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109842 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109886 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID:
\"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.109927 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.211892 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212186 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212315 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212416 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkl7c\" (UniqueName: \"kubernetes.io/projected/1a4e51d1-8dea-4f12-b7e9-7888f5672711-kube-api-access-kkl7c\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212664 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.212944 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.213460 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.225713 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.230183 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-config\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.235908 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.235908 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.236158 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1a4e51d1-8dea-4f12-b7e9-7888f5672711-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.236236 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a4e51d1-8dea-4f12-b7e9-7888f5672711-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.239954 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.247809 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkl7c\" (UniqueName: \"kubernetes.io/projected/1a4e51d1-8dea-4f12-b7e9-7888f5672711-kube-api-access-kkl7c\") pod \"ovsdbserver-sb-0\" (UID: \"1a4e51d1-8dea-4f12-b7e9-7888f5672711\") " pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: I0122 14:00:04.295757 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.804248 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.804409 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9h4dg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-hwccv_openstack(31fc43cb-0b18-49b4-a19b-6047e962f742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.806140 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" podUID="31fc43cb-0b18-49b4-a19b-6047e962f742" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.843656 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.843867 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr8r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-8mfxs_openstack(8ba28aa8-af6e-4b05-b308-1a5d989da923): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:00:04 crc kubenswrapper[4769]: E0122 14:00:04.845123 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" podUID="8ba28aa8-af6e-4b05-b308-1a5d989da923" Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.306311 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: W0122 14:00:05.332318 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod048fbe43_0fef_46e8_bc9d_038c96a4696c.slice/crio-799a7dd33ec4a813965958d3822b3ed52b98cd160ad30b3ce66e5d64579eaa3b WatchSource:0}: Error finding container 799a7dd33ec4a813965958d3822b3ed52b98cd160ad30b3ce66e5d64579eaa3b: Status 404 returned error can't find the container with id 799a7dd33ec4a813965958d3822b3ed52b98cd160ad30b3ce66e5d64579eaa3b Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.332362 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.406852 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
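
Both dnsmasq-dns pods fail their init container with ErrImagePull: the CRI pull of quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified was cancelled mid-copy (rpc code Canceled, "context canceled", typically because the requesting pod went away or the kubelet aborted the pull), so kuberuntime_manager logs the full init-container spec and pod_workers skips the sync; the volume teardown entries that follow are consistent with these pods being replaced. A sketch, with the same kubeconfig assumption as the earlier sketch, that surfaces the Warning events the kubelet records for one of these pods:

```go
// Sketch: list Warning events for the failing dnsmasq pod named in the
// pod_workers entry above. Expect reasons such as "Failed" and "BackOff"
// carrying the ErrImagePull message seen in the log.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	events, err := client.CoreV1().Events("openstack").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=dnsmasq-dns-78dd6ddcc-8mfxs,type=Warning",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.LastTimestamp.Format("15:04:05"), e.Reason, e.Message)
	}
}
```
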
UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.418050 4769 generic.go:334] "Generic (PLEG): container finished" podID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerID="84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989" exitCode=0 Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.418097 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerDied","Data":"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989"} Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.421608 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerStarted","Data":"ccc004cd79462493e89b2cd51c3ab3ddf01650baa9a183653d7b3f8461132890"} Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.422912 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"048fbe43-0fef-46e8-bc9d-038c96a4696c","Type":"ContainerStarted","Data":"799a7dd33ec4a813965958d3822b3ed52b98cd160ad30b3ce66e5d64579eaa3b"} Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.446945 4769 generic.go:334] "Generic (PLEG): container finished" podID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerID="ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895" exitCode=0 Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.447695 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerDied","Data":"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895"} Jan 22 14:00:05 crc kubenswrapper[4769]: W0122 14:00:05.463257 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa5525a_0eb2_487f_8721_3ef58f5df4aa.slice/crio-29899af631e597d454d94c2e237fd5da716c05b941de37c5f4c8b59774f7befb WatchSource:0}: Error finding container 29899af631e597d454d94c2e237fd5da716c05b941de37c5f4c8b59774f7befb: Status 404 returned error can't find the container with id 29899af631e597d454d94c2e237fd5da716c05b941de37c5f4c8b59774f7befb Jan 22 14:00:05 crc kubenswrapper[4769]: W0122 14:00:05.463566 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5478968_e798_44de_b3ed_632864fc0607.slice/crio-77ad7917c701542f52eaa6c296d3d5c705a1b360fd970f58a554a0c63596423a WatchSource:0}: Error finding container 77ad7917c701542f52eaa6c296d3d5c705a1b360fd970f58a554a0c63596423a: Status 404 returned error can't find the container with id 77ad7917c701542f52eaa6c296d3d5c705a1b360fd970f58a554a0c63596423a Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.630372 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.726873 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.741533 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.849812 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-ljbrk"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.894336 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 22 14:00:05 crc kubenswrapper[4769]: I0122 14:00:05.992171 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-57w6l"] Jan 22 14:00:06 crc kubenswrapper[4769]: I0122 14:00:06.455116 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5478968-e798-44de-b3ed-632864fc0607","Type":"ContainerStarted","Data":"77ad7917c701542f52eaa6c296d3d5c705a1b360fd970f58a554a0c63596423a"} Jan 22 14:00:06 crc kubenswrapper[4769]: I0122 14:00:06.456644 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3aa5525a-0eb2-487f-8721-3ef58f5df4aa","Type":"ContainerStarted","Data":"29899af631e597d454d94c2e237fd5da716c05b941de37c5f4c8b59774f7befb"} Jan 22 14:00:06 crc kubenswrapper[4769]: W0122 14:00:06.863659 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f6b8be2_7370_47ca_843b_1dea67d837c3.slice/crio-69ee5901eeefc3366f3ede871af6311168586781f9d819ea65be75a58690b69d WatchSource:0}: Error finding container 69ee5901eeefc3366f3ede871af6311168586781f9d819ea65be75a58690b69d: Status 404 returned error can't find the container with id 69ee5901eeefc3366f3ede871af6311168586781f9d819ea65be75a58690b69d Jan 22 14:00:06 crc kubenswrapper[4769]: W0122 14:00:06.886133 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb7ce269_d7ec_4db1_aab3_b22da5d56c6e.slice/crio-e84fd0a260edc9ab68448161b8b2806b1fa84f91e3e93022f3f6f4d06802a2ba WatchSource:0}: Error finding container e84fd0a260edc9ab68448161b8b2806b1fa84f91e3e93022f3f6f4d06802a2ba: Status 404 returned error can't find the container with id e84fd0a260edc9ab68448161b8b2806b1fa84f91e3e93022f3f6f4d06802a2ba Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.076322 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.158926 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.209255 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") pod \"8ba28aa8-af6e-4b05-b308-1a5d989da923\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.209343 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") pod \"8ba28aa8-af6e-4b05-b308-1a5d989da923\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.209412 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") pod \"8ba28aa8-af6e-4b05-b308-1a5d989da923\" (UID: \"8ba28aa8-af6e-4b05-b308-1a5d989da923\") " Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.209839 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config" (OuterVolumeSpecName: "config") pod "8ba28aa8-af6e-4b05-b308-1a5d989da923" (UID: "8ba28aa8-af6e-4b05-b308-1a5d989da923"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.210131 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ba28aa8-af6e-4b05-b308-1a5d989da923" (UID: "8ba28aa8-af6e-4b05-b308-1a5d989da923"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.210573 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.210588 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ba28aa8-af6e-4b05-b308-1a5d989da923-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.309223 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7" (OuterVolumeSpecName: "kube-api-access-tr8r7") pod "8ba28aa8-af6e-4b05-b308-1a5d989da923" (UID: "8ba28aa8-af6e-4b05-b308-1a5d989da923"). InnerVolumeSpecName "kube-api-access-tr8r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.311999 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") pod \"31fc43cb-0b18-49b4-a19b-6047e962f742\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.312062 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") pod \"31fc43cb-0b18-49b4-a19b-6047e962f742\" (UID: \"31fc43cb-0b18-49b4-a19b-6047e962f742\") " Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.312606 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr8r7\" (UniqueName: \"kubernetes.io/projected/8ba28aa8-af6e-4b05-b308-1a5d989da923-kube-api-access-tr8r7\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.312636 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config" (OuterVolumeSpecName: "config") pod "31fc43cb-0b18-49b4-a19b-6047e962f742" (UID: "31fc43cb-0b18-49b4-a19b-6047e962f742"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.409117 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg" (OuterVolumeSpecName: "kube-api-access-9h4dg") pod "31fc43cb-0b18-49b4-a19b-6047e962f742" (UID: "31fc43cb-0b18-49b4-a19b-6047e962f742"). InnerVolumeSpecName "kube-api-access-9h4dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.414034 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h4dg\" (UniqueName: \"kubernetes.io/projected/31fc43cb-0b18-49b4-a19b-6047e962f742-kube-api-access-9h4dg\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.414065 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fc43cb-0b18-49b4-a19b-6047e962f742-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.466602 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" event={"ID":"b8a0650e-6e96-491e-88df-d228be8155e1","Type":"ContainerStarted","Data":"d9d710928e4433f5dd0e9be2190ede9e3b125f18a2ee1bfedf9c84ebf537f3b3"} Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.467860 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e7522e6-de75-492d-b445-a463f875e393","Type":"ContainerStarted","Data":"cb0f27b9c3686fd6437f8bd8519d2239c1ac22e630bed57eba5dc3bb400528c4"} Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.469452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"760402cd-68ff-4d2e-a1ba-c54132e75c13","Type":"ContainerStarted","Data":"5bdfa8a3a5389929b46e2cca659be0dd29437e092c8f665d2fc10c73fde2ca38"} Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.470871 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.472224 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-8mfxs" event={"ID":"8ba28aa8-af6e-4b05-b308-1a5d989da923","Type":"ContainerDied","Data":"03a0c26c66ed5a4ae9f84cf892076a64dbf88a82ab566091a04d268bb55d7269"} Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.475061 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" event={"ID":"31fc43cb-0b18-49b4-a19b-6047e962f742","Type":"ContainerDied","Data":"4b0f19965ee16c593d67cb00a080c26ccc988a655a590dee7acff08c668a12d9"} Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.475154 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-hwccv" Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.480983 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a4e51d1-8dea-4f12-b7e9-7888f5672711","Type":"ContainerStarted","Data":"8e253edc1258b967da233f5f102d23b1d6d8b7632597b5713a378395c8c4aa76"} Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.483104 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk" event={"ID":"db7ce269-d7ec-4db1-aab3-b22da5d56c6e","Type":"ContainerStarted","Data":"e84fd0a260edc9ab68448161b8b2806b1fa84f91e3e93022f3f6f4d06802a2ba"} Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.487815 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-57w6l" event={"ID":"2f6b8be2-7370-47ca-843b-1dea67d837c3","Type":"ContainerStarted","Data":"69ee5901eeefc3366f3ede871af6311168586781f9d819ea65be75a58690b69d"} Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.726833 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.731966 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-hwccv"] Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.746185 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 14:00:07 crc kubenswrapper[4769]: I0122 14:00:07.753385 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-8mfxs"] Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.500271 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerStarted","Data":"cd37417a78b080b1ccc1b5edbe869aca8460373ef9a4d35cbfcb0a8060072f8f"} Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.503111 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerStarted","Data":"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41"} Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.503192 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.505948 4769 generic.go:334] "Generic (PLEG): container finished" podID="b8a0650e-6e96-491e-88df-d228be8155e1" containerID="b13faa7bdb54d2f31f81f30cd670139cd9b89adfb82f77120bfad2d5527962d2" exitCode=0 Jan 22 14:00:08 crc 
kubenswrapper[4769]: I0122 14:00:08.506022 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" event={"ID":"b8a0650e-6e96-491e-88df-d228be8155e1","Type":"ContainerDied","Data":"b13faa7bdb54d2f31f81f30cd670139cd9b89adfb82f77120bfad2d5527962d2"} Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.508186 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerStarted","Data":"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"} Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.508335 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.510530 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerStarted","Data":"02b31e2a239b0168026857e943798de5de7f95b04782c217474e99a5a431076d"} Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.569466 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" podStartSLOduration=4.907553848 podStartE2EDuration="18.569444314s" podCreationTimestamp="2026-01-22 13:59:50 +0000 UTC" firstStartedPulling="2026-01-22 13:59:51.369826625 +0000 UTC m=+970.780936554" lastFinishedPulling="2026-01-22 14:00:05.031717081 +0000 UTC m=+984.442827020" observedRunningTime="2026-01-22 14:00:08.55634477 +0000 UTC m=+987.967454699" watchObservedRunningTime="2026-01-22 14:00:08.569444314 +0000 UTC m=+987.980554243" Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.582274 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" podStartSLOduration=4.859843879 podStartE2EDuration="18.58225542s" podCreationTimestamp="2026-01-22 13:59:50 +0000 UTC" firstStartedPulling="2026-01-22 13:59:51.311351713 +0000 UTC m=+970.722461642" lastFinishedPulling="2026-01-22 14:00:05.033763254 +0000 UTC m=+984.444873183" observedRunningTime="2026-01-22 14:00:08.577655099 +0000 UTC m=+987.988765038" watchObservedRunningTime="2026-01-22 14:00:08.58225542 +0000 UTC m=+987.993365349" Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.893505 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fc43cb-0b18-49b4-a19b-6047e962f742" path="/var/lib/kubelet/pods/31fc43cb-0b18-49b4-a19b-6047e962f742/volumes" Jan 22 14:00:08 crc kubenswrapper[4769]: I0122 14:00:08.893986 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba28aa8-af6e-4b05-b308-1a5d989da923" path="/var/lib/kubelet/pods/8ba28aa8-af6e-4b05-b308-1a5d989da923/volumes" Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.838013 4769 util.go:48] "No ready sandbox for pod can be found. 
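[Editor's note] The pod_startup_latency_tracker entries above encode two durations. podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (for dnsmasq-dns-666b6646f7-4c5lx: 14:00:08.569444314 − 13:59:50 = 18.569444314s), and podStartSLOduration appears to subtract the image-pull window measured on the monotonic clock, i.e. the m=+... offsets (m=+984.442827020 − m=+970.780936554 = 13.661890466s). A small check with the numbers hard-coded from that entry reproduces the logged value exactly:

    package main

    import "fmt"

    func main() {
    	// Monotonic-clock offsets (the m=+... values) copied from the
    	// dnsmasq-dns-666b6646f7-4c5lx entry above, in seconds.
    	firstStartedPulling := 970.780936554
    	lastFinishedPulling := 984.442827020
    	e2e := 18.569444314 // watchObservedRunningTime - podCreationTimestamp

    	pulling := lastFinishedPulling - firstStartedPulling
    	fmt.Printf("time spent pulling images: %.9fs\n", pulling)     // 13.661890466s
    	fmt.Printf("podStartSLOduration:      %.9fs\n", e2e-pulling)  // 4.907553848s, as logged
    }

In other words, the SLO figure deliberately excludes image-pull time, which is why pods on a cold node show a large E2E duration but a small SLO duration.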
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.914551 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") pod \"b8a0650e-6e96-491e-88df-d228be8155e1\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.914610 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") pod \"b8a0650e-6e96-491e-88df-d228be8155e1\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.914694 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") pod \"b8a0650e-6e96-491e-88df-d228be8155e1\" (UID: \"b8a0650e-6e96-491e-88df-d228be8155e1\") " Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.915522 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "b8a0650e-6e96-491e-88df-d228be8155e1" (UID: "b8a0650e-6e96-491e-88df-d228be8155e1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.920802 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4" (OuterVolumeSpecName: "kube-api-access-57ts4") pod "b8a0650e-6e96-491e-88df-d228be8155e1" (UID: "b8a0650e-6e96-491e-88df-d228be8155e1"). InnerVolumeSpecName "kube-api-access-57ts4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:12 crc kubenswrapper[4769]: I0122 14:00:12.922036 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b8a0650e-6e96-491e-88df-d228be8155e1" (UID: "b8a0650e-6e96-491e-88df-d228be8155e1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.016237 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57ts4\" (UniqueName: \"kubernetes.io/projected/b8a0650e-6e96-491e-88df-d228be8155e1-kube-api-access-57ts4\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.016278 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8a0650e-6e96-491e-88df-d228be8155e1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.016288 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8a0650e-6e96-491e-88df-d228be8155e1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.559157 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" event={"ID":"b8a0650e-6e96-491e-88df-d228be8155e1","Type":"ContainerDied","Data":"d9d710928e4433f5dd0e9be2190ede9e3b125f18a2ee1bfedf9c84ebf537f3b3"} Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.559188 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484840-2ln64" Jan 22 14:00:13 crc kubenswrapper[4769]: I0122 14:00:13.559224 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9d710928e4433f5dd0e9be2190ede9e3b125f18a2ee1bfedf9c84ebf537f3b3" Jan 22 14:00:14 crc kubenswrapper[4769]: I0122 14:00:14.568701 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5478968-e798-44de-b3ed-632864fc0607","Type":"ContainerStarted","Data":"0afd7437e75b49f642960a02d03f03938d716eec8201f40d3ed5c5c261334175"} Jan 22 14:00:14 crc kubenswrapper[4769]: I0122 14:00:14.576221 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3aa5525a-0eb2-487f-8721-3ef58f5df4aa","Type":"ContainerStarted","Data":"21548a6c8213d484e0dd4fe09e62fb75dcdebf16d0f5d31b09b1149303916de6"} Jan 22 14:00:14 crc kubenswrapper[4769]: I0122 14:00:14.576374 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 22 14:00:14 crc kubenswrapper[4769]: I0122 14:00:14.619063 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.87032325 podStartE2EDuration="20.619028427s" podCreationTimestamp="2026-01-22 13:59:54 +0000 UTC" firstStartedPulling="2026-01-22 14:00:05.466027612 +0000 UTC m=+984.877137541" lastFinishedPulling="2026-01-22 14:00:13.214732789 +0000 UTC m=+992.625842718" observedRunningTime="2026-01-22 14:00:14.614755415 +0000 UTC m=+994.025865344" watchObservedRunningTime="2026-01-22 14:00:14.619028427 +0000 UTC m=+994.030138346" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.590515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk" event={"ID":"db7ce269-d7ec-4db1-aab3-b22da5d56c6e","Type":"ContainerStarted","Data":"ba45903685c9d50a9fa25dd56749b192901d5d4436b77f70c03fd2036ec364d5"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.591858 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.592089 
4769 generic.go:334] "Generic (PLEG): container finished" podID="2f6b8be2-7370-47ca-843b-1dea67d837c3" containerID="4d7fdba300b46601763a56f3d07345d0392d08985f1061796bbcbc2dfb3c74f3" exitCode=0 Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.592172 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-57w6l" event={"ID":"2f6b8be2-7370-47ca-843b-1dea67d837c3","Type":"ContainerDied","Data":"4d7fdba300b46601763a56f3d07345d0392d08985f1061796bbcbc2dfb3c74f3"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.594214 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e7522e6-de75-492d-b445-a463f875e393","Type":"ContainerStarted","Data":"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.595081 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.597518 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"760402cd-68ff-4d2e-a1ba-c54132e75c13","Type":"ContainerStarted","Data":"3959ddc84318de0ab65be59c34120b53236f9e6ac62d7c1f9f0c130530676e02"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.599547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"048fbe43-0fef-46e8-bc9d-038c96a4696c","Type":"ContainerStarted","Data":"54c0e6317044865508a4ba1510f495e603533d4a18e8d0b35f92da59b89098eb"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.601148 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a4e51d1-8dea-4f12-b7e9-7888f5672711","Type":"ContainerStarted","Data":"121328f30451daebae9d2c6e8c47cd3fc593781f2d600f51a2d4a1bf39d37dfd"} Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.613809 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ljbrk" podStartSLOduration=9.661007793 podStartE2EDuration="16.613777003s" podCreationTimestamp="2026-01-22 13:59:59 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.887675585 +0000 UTC m=+986.298785514" lastFinishedPulling="2026-01-22 14:00:13.840444805 +0000 UTC m=+993.251554724" observedRunningTime="2026-01-22 14:00:15.607483428 +0000 UTC m=+995.018593367" watchObservedRunningTime="2026-01-22 14:00:15.613777003 +0000 UTC m=+995.024886932" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.665713 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.180047359 podStartE2EDuration="19.665694814s" podCreationTimestamp="2026-01-22 13:59:56 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.8775945 +0000 UTC m=+986.288704429" lastFinishedPulling="2026-01-22 14:00:14.363241945 +0000 UTC m=+993.774351884" observedRunningTime="2026-01-22 14:00:15.664831911 +0000 UTC m=+995.075941840" watchObservedRunningTime="2026-01-22 14:00:15.665694814 +0000 UTC m=+995.076804743" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.809947 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.862531 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 14:00:15 crc kubenswrapper[4769]: I0122 14:00:15.930520 4769 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 14:00:16 crc kubenswrapper[4769]: I0122 14:00:16.614037 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-57w6l" event={"ID":"2f6b8be2-7370-47ca-843b-1dea67d837c3","Type":"ContainerStarted","Data":"403630aeb0a046af747092fbec28b3c7a35d4d9a9f94b0b704c9179e90ab6e7d"} Jan 22 14:00:16 crc kubenswrapper[4769]: I0122 14:00:16.614345 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-57w6l" event={"ID":"2f6b8be2-7370-47ca-843b-1dea67d837c3","Type":"ContainerStarted","Data":"8a8365e1cba25eb5c1285c7e161f8031425bf00f5bef1e99a8d7cc080522c76d"} Jan 22 14:00:16 crc kubenswrapper[4769]: I0122 14:00:16.614467 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="dnsmasq-dns" containerID="cri-o://a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54" gracePeriod=10 Jan 22 14:00:16 crc kubenswrapper[4769]: I0122 14:00:16.648835 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-57w6l" podStartSLOduration=11.144977178 podStartE2EDuration="17.648813044s" podCreationTimestamp="2026-01-22 13:59:59 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.865931105 +0000 UTC m=+986.277041024" lastFinishedPulling="2026-01-22 14:00:13.369766961 +0000 UTC m=+992.780876890" observedRunningTime="2026-01-22 14:00:16.644764768 +0000 UTC m=+996.055874717" watchObservedRunningTime="2026-01-22 14:00:16.648813044 +0000 UTC m=+996.059922973" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.179520 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.321437 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") pod \"0e6c47fe-34e3-498e-a488-96efc7e689b0\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.321524 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") pod \"0e6c47fe-34e3-498e-a488-96efc7e689b0\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.321628 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") pod \"0e6c47fe-34e3-498e-a488-96efc7e689b0\" (UID: \"0e6c47fe-34e3-498e-a488-96efc7e689b0\") " Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.328208 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn" (OuterVolumeSpecName: "kube-api-access-sjjzn") pod "0e6c47fe-34e3-498e-a488-96efc7e689b0" (UID: "0e6c47fe-34e3-498e-a488-96efc7e689b0"). InnerVolumeSpecName "kube-api-access-sjjzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.366977 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e6c47fe-34e3-498e-a488-96efc7e689b0" (UID: "0e6c47fe-34e3-498e-a488-96efc7e689b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.368510 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config" (OuterVolumeSpecName: "config") pod "0e6c47fe-34e3-498e-a488-96efc7e689b0" (UID: "0e6c47fe-34e3-498e-a488-96efc7e689b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.424404 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.424755 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjjzn\" (UniqueName: \"kubernetes.io/projected/0e6c47fe-34e3-498e-a488-96efc7e689b0-kube-api-access-sjjzn\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.424771 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e6c47fe-34e3-498e-a488-96efc7e689b0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623405 4769 generic.go:334] "Generic (PLEG): container finished" podID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerID="a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54" exitCode=0 Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623501 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623521 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerDied","Data":"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"} Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-4c5lx" event={"ID":"0e6c47fe-34e3-498e-a488-96efc7e689b0","Type":"ContainerDied","Data":"8215e5b4fd26aed68a6a57c59e5f8a125091e3ac96652ebf56614a1931aa9fcb"} Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623566 4769 scope.go:117] "RemoveContainer" containerID="a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623845 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.623859 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.658402 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 14:00:17 crc kubenswrapper[4769]: I0122 14:00:17.664954 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-4c5lx"] Jan 22 14:00:18 crc kubenswrapper[4769]: I0122 14:00:18.663002 4769 scope.go:117] "RemoveContainer" containerID="84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989" Jan 22 14:00:18 crc kubenswrapper[4769]: I0122 14:00:18.891345 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" path="/var/lib/kubelet/pods/0e6c47fe-34e3-498e-a488-96efc7e689b0/volumes" Jan 22 14:00:19 crc kubenswrapper[4769]: I0122 14:00:19.667225 4769 generic.go:334] "Generic (PLEG): container finished" podID="d5478968-e798-44de-b3ed-632864fc0607" containerID="0afd7437e75b49f642960a02d03f03938d716eec8201f40d3ed5c5c261334175" exitCode=0 Jan 22 14:00:19 crc kubenswrapper[4769]: I0122 14:00:19.667260 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5478968-e798-44de-b3ed-632864fc0607","Type":"ContainerDied","Data":"0afd7437e75b49f642960a02d03f03938d716eec8201f40d3ed5c5c261334175"} Jan 22 14:00:19 crc kubenswrapper[4769]: I0122 14:00:19.862887 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:20.691095 4769 generic.go:334] "Generic (PLEG): container finished" podID="048fbe43-0fef-46e8-bc9d-038c96a4696c" containerID="54c0e6317044865508a4ba1510f495e603533d4a18e8d0b35f92da59b89098eb" exitCode=0 Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:20.691131 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"048fbe43-0fef-46e8-bc9d-038c96a4696c","Type":"ContainerDied","Data":"54c0e6317044865508a4ba1510f495e603533d4a18e8d0b35f92da59b89098eb"} Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:26.583319 4769 scope.go:117] "RemoveContainer" containerID="a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54" Jan 22 14:00:26 crc kubenswrapper[4769]: E0122 14:00:26.584341 4769 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54\": container with ID starting with a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54 not found: ID does not exist" containerID="a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54" Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:26.584390 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54"} err="failed to get container status \"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54\": rpc error: code = NotFound desc = could not find container \"a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54\": container with ID starting with a3c2ec4ffc4b524e59a664b163a1deea35d1d62b8cc245aafee6a6a6f1417f54 not found: ID does not exist" Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:26.584420 4769 scope.go:117] "RemoveContainer" containerID="84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989" Jan 22 14:00:26 crc kubenswrapper[4769]: E0122 14:00:26.584668 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989\": container with ID starting with 84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989 not found: ID does not exist" containerID="84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989" Jan 22 14:00:26 crc kubenswrapper[4769]: I0122 14:00:26.584697 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989"} err="failed to get container status \"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989\": rpc error: code = NotFound desc = could not find container \"84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989\": container with ID starting with 84798df2ae1ab219f2618c1a2106e22205e2ad5f85b084c29279df47b1ca4989 not found: ID does not exist" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.081441 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:27 crc kubenswrapper[4769]: E0122 14:00:27.087165 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="init" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087193 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="init" Jan 22 14:00:27 crc kubenswrapper[4769]: E0122 14:00:27.087222 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="dnsmasq-dns" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087229 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="dnsmasq-dns" Jan 22 14:00:27 crc kubenswrapper[4769]: E0122 14:00:27.087246 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a0650e-6e96-491e-88df-d228be8155e1" containerName="collect-profiles" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087253 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a0650e-6e96-491e-88df-d228be8155e1" containerName="collect-profiles" Jan 22 14:00:27 crc kubenswrapper[4769]: 
I0122 14:00:27.087447 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a0650e-6e96-491e-88df-d228be8155e1" containerName="collect-profiles" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.087469 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e6c47fe-34e3-498e-a488-96efc7e689b0" containerName="dnsmasq-dns" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.088407 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.098542 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.101461 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.277744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.277986 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.278137 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.380078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.380445 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.380483 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.381806 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: 
\"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.382390 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.399173 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") pod \"dnsmasq-dns-7cb5889db5-5f5mt\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.410627 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.744964 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"048fbe43-0fef-46e8-bc9d-038c96a4696c","Type":"ContainerStarted","Data":"1d7a9c196c826197a35b4dc8d806edfa528f1331f96842069a37f1b52fa7dc55"} Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.747041 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1a4e51d1-8dea-4f12-b7e9-7888f5672711","Type":"ContainerStarted","Data":"c75905ece5affa6d47506c893319fe219eb68f7809a10fee02bad716f88a9936"} Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.748950 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5478968-e798-44de-b3ed-632864fc0607","Type":"ContainerStarted","Data":"330233cec66a5cad330a9043a8a7e1a16cf6c2ea3faaad17a73fbe3e5bcace85"} Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.750609 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"760402cd-68ff-4d2e-a1ba-c54132e75c13","Type":"ContainerStarted","Data":"3650a732270b43b89a87a9e0d4bc365b089e9e8dc1fe46e3ea657c3ab8a54ef6"} Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.774080 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.564157146 podStartE2EDuration="34.774058009s" podCreationTimestamp="2026-01-22 13:59:53 +0000 UTC" firstStartedPulling="2026-01-22 14:00:05.354079698 +0000 UTC m=+984.765189617" lastFinishedPulling="2026-01-22 14:00:13.563980541 +0000 UTC m=+992.975090480" observedRunningTime="2026-01-22 14:00:27.77257053 +0000 UTC m=+1007.183680539" watchObservedRunningTime="2026-01-22 14:00:27.774058009 +0000 UTC m=+1007.185167938" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.800290 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.914321322 podStartE2EDuration="36.800268946s" podCreationTimestamp="2026-01-22 13:59:51 +0000 UTC" firstStartedPulling="2026-01-22 14:00:05.483835508 +0000 UTC m=+984.894945437" lastFinishedPulling="2026-01-22 14:00:13.369783132 +0000 UTC m=+992.780893061" observedRunningTime="2026-01-22 14:00:27.795917551 +0000 UTC m=+1007.207027500" watchObservedRunningTime="2026-01-22 14:00:27.800268946 +0000 UTC m=+1007.211378875" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.819534 
4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=8.996894271 podStartE2EDuration="28.81951411s" podCreationTimestamp="2026-01-22 13:59:59 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.881152964 +0000 UTC m=+986.292262893" lastFinishedPulling="2026-01-22 14:00:26.703772803 +0000 UTC m=+1006.114882732" observedRunningTime="2026-01-22 14:00:27.816545692 +0000 UTC m=+1007.227655621" watchObservedRunningTime="2026-01-22 14:00:27.81951411 +0000 UTC m=+1007.230624039" Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.833651 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:27 crc kubenswrapper[4769]: I0122 14:00:27.847825 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.029520856 podStartE2EDuration="25.847805782s" podCreationTimestamp="2026-01-22 14:00:02 +0000 UTC" firstStartedPulling="2026-01-22 14:00:06.868700777 +0000 UTC m=+986.279810716" lastFinishedPulling="2026-01-22 14:00:26.686985713 +0000 UTC m=+1006.098095642" observedRunningTime="2026-01-22 14:00:27.846496308 +0000 UTC m=+1007.257606237" watchObservedRunningTime="2026-01-22 14:00:27.847805782 +0000 UTC m=+1007.258915711" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.263677 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.272655 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.275528 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sfs6t" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.275544 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.280900 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.281373 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.282080 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.295993 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.346889 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397357 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397461 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-lock\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 
crc kubenswrapper[4769]: I0122 14:00:28.397662 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397748 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce65dba3-22b9-482f-b3da-2f4705468ea4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397784 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-cache\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.397997 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrb6m\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-kube-api-access-xrb6m\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499607 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrb6m\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-kube-api-access-xrb6m\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499903 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499942 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-lock\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499979 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.499999 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce65dba3-22b9-482f-b3da-2f4705468ea4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: E0122 14:00:28.500062 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:28 crc kubenswrapper[4769]: E0122 14:00:28.500081 4769 projected.go:194] Error preparing data 
for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.500097 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-cache\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: E0122 14:00:28.500127 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:29.000107254 +0000 UTC m=+1008.411217183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.500314 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.500441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-lock\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.500626 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ce65dba3-22b9-482f-b3da-2f4705468ea4-cache\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.506940 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce65dba3-22b9-482f-b3da-2f4705468ea4-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.515671 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrb6m\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-kube-api-access-xrb6m\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.522907 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.756977 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jmhxf"] Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.758289 4769 util.go:30] "No sandbox for pod can be found. 
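[Editor's note] The etc-swift mount failure above is ordering, not breakage: swift-storage-0 references configmap "swift-ring-files" before the swift-ring-rebalance job (admitted immediately afterwards) has created it, so MountVolume.SetUp fails and nestedpendingoperations schedules a retry with backoff, here "No retries permitted until ... (durationBeforeRetry 500ms)". A sketch of that retry shape; the 500ms initial delay matches the log, but the doubling, cap, and attempt limit are illustrative choices, not the kubelet's exact policy:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    var errConfigMapMissing = errors.New(`configmap "swift-ring-files" not found`)

    // mountWithBackoff retries a failing MountVolume.SetUp with exponential
    // backoff, starting at 500ms as in the nestedpendingoperations line above.
    func mountWithBackoff(setUp func() error) error {
    	backoff := 500 * time.Millisecond
    	const maxBackoff = 2 * time.Minute
    	for attempt := 1; ; attempt++ {
    		err := setUp()
    		if err == nil {
    			return nil
    		}
    		if attempt >= 10 {
    			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
    		}
    		fmt.Printf("SetUp failed: %v; no retries permitted until %s (durationBeforeRetry %s)\n",
    			err, time.Now().Add(backoff).Format(time.RFC3339Nano), backoff)
    		time.Sleep(backoff)
    		if backoff *= 2; backoff > maxBackoff {
    			backoff = maxBackoff
    		}
    	}
    }

    func main() {
    	tries := 0
    	// The configmap appears once the swift-ring-rebalance job publishes it.
    	setUp := func() error {
    		if tries++; tries < 3 {
    			return errConfigMapMissing
    		}
    		return nil
    	}
    	fmt.Println(mountWithBackoff(setUp)) // succeeds on the third attempt
    }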
Need to start a new one" pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.761532 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.761917 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.762207 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.770245 4769 generic.go:334] "Generic (PLEG): container finished" podID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerID="787c971a0dea74b3f6ee351dd1bb60c21eb90e1fc50d951e6c355694f371ee32" exitCode=0 Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.770368 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerDied","Data":"787c971a0dea74b3f6ee351dd1bb60c21eb90e1fc50d951e6c355694f371ee32"} Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.770425 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerStarted","Data":"25c320cddf3aa10b554d2c87ef85148faa26e18a085d0ac5f86a88df32d73795"} Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.771164 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.775622 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jmhxf"] Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.831557 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904677 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904746 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904781 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904906 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " 
pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904943 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904970 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:28 crc kubenswrapper[4769]: I0122 14:00:28.904996 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008246 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008390 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008521 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008616 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008661 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.008711 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:29 
crc kubenswrapper[4769]: I0122 14:00:29.008753 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.009056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.009541 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: E0122 14:00:29.009666 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:29 crc kubenswrapper[4769]: E0122 14:00:29.009690 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:29 crc kubenswrapper[4769]: E0122 14:00:29.009739 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:30.009714188 +0000 UTC m=+1009.420824117 (durationBeforeRetry 1s). 
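
The repeated failure above is consistent throughout this window: the etc-swift volume of swift-storage-0 is a projected volume sourcing the swift-ring-files ConfigMap, and that ConfigMap does not exist yet (presumably it is published by the swift-ring-rebalance-jmhxf job starting in parallel). As an illustrative sketch only, not anything taken from this log or from kubelet, a client-go check such as the following would show when the ConfigMap appears; the namespace and name come from the log, while the kubeconfig path is an assumption.

```go
// Illustrative sketch (not from this log or kubelet): check for the ConfigMap
// whose absence causes the MountVolume.SetUp failures above. Namespace and
// name are taken from the log; the kubeconfig location is an assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	cm, err := client.CoreV1().ConfigMaps("openstack").Get(
		context.TODO(), "swift-ring-files", metav1.GetOptions{})
	if err != nil {
		// While this errors, the kubelet keeps logging the failures seen above.
		fmt.Println("not mountable yet:", err)
		return
	}
	fmt.Printf("present with %d keys; the projected mount can now succeed\n", len(cm.Data))
}
```
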
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.009051 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.010841 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.012702 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.020405 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.024722 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.033096 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") pod \"swift-ring-rebalance-jmhxf\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.094605 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.124958 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.126818 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.130298 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2ndkt"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.131248 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.134650 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.135498 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.191014 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.191280 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2ndkt"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.197447 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212075 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovs-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212180 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovn-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212239 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212311 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212356 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212393 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-config\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212425 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212592 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212634 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhqk\" (UniqueName: \"kubernetes.io/projected/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-kube-api-access-klhqk\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.212680 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-combined-ca-bundle\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314543 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314584 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314603 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314632 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-config\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314658 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314697 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314716 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klhqk\" (UniqueName: \"kubernetes.io/projected/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-kube-api-access-klhqk\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314746 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-combined-ca-bundle\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314768 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovs-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.314824 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovn-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.315255 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovn-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.315871 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-config\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.316210 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.316264 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-ovs-rundir\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.318313 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" 
(UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.318535 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.321646 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-combined-ca-bundle\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.325029 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.334512 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klhqk\" (UniqueName: \"kubernetes.io/projected/cbba9b5e-2f1d-4a3a-930e-c835070aefe9-kube-api-access-klhqk\") pod \"ovn-controller-metrics-2ndkt\" (UID: \"cbba9b5e-2f1d-4a3a-930e-c835070aefe9\") " pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.342424 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") pod \"dnsmasq-dns-6c89d5d749-5sxsl\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.387353 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.387888 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.421034 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.422405 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.449725 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.468188 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.536390 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-2ndkt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.619292 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jmhxf"] Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.627628 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.627940 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.628031 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.628113 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.628248 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: W0122 14:00:29.656095 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf13b9a7b_6f5e_48fd_8d95_3beb851e9819.slice/crio-895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0 WatchSource:0}: Error finding container 895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0: Status 404 returned error can't find the container with id 895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0 Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.729757 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.729850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " 
pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.729929 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.729967 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.730002 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.730892 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.731681 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.731709 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.732238 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.758374 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") pod \"dnsmasq-dns-698758b865-twczw\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.782420 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.795076 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerStarted","Data":"9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2"} Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.795284 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.796927 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jmhxf" event={"ID":"f13b9a7b-6f5e-48fd-8d95-3beb851e9819","Type":"ContainerStarted","Data":"895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0"} Jan 22 14:00:29 crc kubenswrapper[4769]: I0122 14:00:29.815682 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" podStartSLOduration=2.815663267 podStartE2EDuration="2.815663267s" podCreationTimestamp="2026-01-22 14:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:29.81271732 +0000 UTC m=+1009.223827249" watchObservedRunningTime="2026-01-22 14:00:29.815663267 +0000 UTC m=+1009.226773196" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.037251 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:30 crc kubenswrapper[4769]: E0122 14:00:30.037468 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:30 crc kubenswrapper[4769]: E0122 14:00:30.037482 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:30 crc kubenswrapper[4769]: E0122 14:00:30.037524 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:32.03750951 +0000 UTC m=+1011.448619439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.075661 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:30 crc kubenswrapper[4769]: W0122 14:00:30.078444 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd778948b_7654_48d1_8be2_edd924d70ad5.slice/crio-0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158 WatchSource:0}: Error finding container 0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158: Status 404 returned error can't find the container with id 0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.138704 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2ndkt"] Jan 22 14:00:30 crc kubenswrapper[4769]: W0122 14:00:30.144764 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbba9b5e_2f1d_4a3a_930e_c835070aefe9.slice/crio-8a4c22b53fd5b3f290d01efbc26af67f088258c28155735c864ea40eea46f9fa WatchSource:0}: Error finding container 8a4c22b53fd5b3f290d01efbc26af67f088258c28155735c864ea40eea46f9fa: Status 404 returned error can't find the container with id 8a4c22b53fd5b3f290d01efbc26af67f088258c28155735c864ea40eea46f9fa Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.351723 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:00:30 crc kubenswrapper[4769]: W0122 14:00:30.371117 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650dfc14_f283_4318_b6bc_4b17cdea15fa.slice/crio-a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225 WatchSource:0}: Error finding container a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225: Status 404 returned error can't find the container with id a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.714925 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.715230 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.765349 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.808248 4769 generic.go:334] "Generic (PLEG): container finished" podID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerID="1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5" exitCode=0 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.808325 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerDied","Data":"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.808608 4769 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerStarted","Data":"a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.810811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2ndkt" event={"ID":"cbba9b5e-2f1d-4a3a-930e-c835070aefe9","Type":"ContainerStarted","Data":"ae1074e9c91d88a635053fb81b0de6149a7e3bd018551d04f068eba718a3841c"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.810889 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2ndkt" event={"ID":"cbba9b5e-2f1d-4a3a-930e-c835070aefe9","Type":"ContainerStarted","Data":"8a4c22b53fd5b3f290d01efbc26af67f088258c28155735c864ea40eea46f9fa"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.813283 4769 generic.go:334] "Generic (PLEG): container finished" podID="d778948b-7654-48d1-8be2-edd924d70ad5" containerID="590989faecf49e258b30df1b08b67d281dbed21a6eda2dd9637b8f2c675de2da" exitCode=0 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.813411 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" event={"ID":"d778948b-7654-48d1-8be2-edd924d70ad5","Type":"ContainerDied","Data":"590989faecf49e258b30df1b08b67d281dbed21a6eda2dd9637b8f2c675de2da"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.813438 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" event={"ID":"d778948b-7654-48d1-8be2-edd924d70ad5","Type":"ContainerStarted","Data":"0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158"} Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.813516 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="dnsmasq-dns" containerID="cri-o://9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2" gracePeriod=10 Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.900220 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2ndkt" podStartSLOduration=1.900199897 podStartE2EDuration="1.900199897s" podCreationTimestamp="2026-01-22 14:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:30.884634568 +0000 UTC m=+1010.295744497" watchObservedRunningTime="2026-01-22 14:00:30.900199897 +0000 UTC m=+1010.311309826" Jan 22 14:00:30 crc kubenswrapper[4769]: I0122 14:00:30.940909 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.176759 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.180816 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.184284 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.184902 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jg78z" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.185117 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.185245 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.188414 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.194479 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.261223 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.261591 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-scripts\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.261718 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.261993 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.262126 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-config\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.262432 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sn9k\" (UniqueName: \"kubernetes.io/projected/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-kube-api-access-5sn9k\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.262543 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.364096 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") pod \"d778948b-7654-48d1-8be2-edd924d70ad5\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.365478 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") pod \"d778948b-7654-48d1-8be2-edd924d70ad5\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.366180 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") pod \"d778948b-7654-48d1-8be2-edd924d70ad5\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.366366 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") pod \"d778948b-7654-48d1-8be2-edd924d70ad5\" (UID: \"d778948b-7654-48d1-8be2-edd924d70ad5\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.366699 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.366899 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-scripts\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.367049 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.367219 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.367600 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.367893 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-config\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.368034 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-scripts\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.368594 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-config\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.369132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sn9k\" (UniqueName: \"kubernetes.io/projected/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-kube-api-access-5sn9k\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.369530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.370741 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.373672 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.373695 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.375028 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6" (OuterVolumeSpecName: "kube-api-access-cp4n6") pod "d778948b-7654-48d1-8be2-edd924d70ad5" (UID: "d778948b-7654-48d1-8be2-edd924d70ad5"). InnerVolumeSpecName "kube-api-access-cp4n6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.387626 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sn9k\" (UniqueName: \"kubernetes.io/projected/32d5b8f0-b7c1-4eeb-9b49-85b0240d28df-kube-api-access-5sn9k\") pod \"ovn-northd-0\" (UID: \"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df\") " pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.389842 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d778948b-7654-48d1-8be2-edd924d70ad5" (UID: "d778948b-7654-48d1-8be2-edd924d70ad5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.390340 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config" (OuterVolumeSpecName: "config") pod "d778948b-7654-48d1-8be2-edd924d70ad5" (UID: "d778948b-7654-48d1-8be2-edd924d70ad5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.406027 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d778948b-7654-48d1-8be2-edd924d70ad5" (UID: "d778948b-7654-48d1-8be2-edd924d70ad5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.473840 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.475224 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.475238 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d778948b-7654-48d1-8be2-edd924d70ad5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.475692 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4n6\" (UniqueName: \"kubernetes.io/projected/d778948b-7654-48d1-8be2-edd924d70ad5-kube-api-access-cp4n6\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.502928 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.821577 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerStarted","Data":"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b"} Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.822980 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.826654 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" event={"ID":"d778948b-7654-48d1-8be2-edd924d70ad5","Type":"ContainerDied","Data":"0a0baa79c6fc4875db3db7fd55282035d68e8f6ffe2eabbfb7794111253d1158"} Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.826705 4769 scope.go:117] "RemoveContainer" containerID="590989faecf49e258b30df1b08b67d281dbed21a6eda2dd9637b8f2c675de2da" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.826867 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-5sxsl" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.833925 4769 generic.go:334] "Generic (PLEG): container finished" podID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerID="9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2" exitCode=0 Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.834027 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerDied","Data":"9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2"} Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.834068 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" event={"ID":"9ccf209c-9829-41bd-af53-26ea82e6c9e0","Type":"ContainerDied","Data":"25c320cddf3aa10b554d2c87ef85148faa26e18a085d0ac5f86a88df32d73795"} Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.834083 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c320cddf3aa10b554d2c87ef85148faa26e18a085d0ac5f86a88df32d73795" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.840769 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.846288 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-twczw" podStartSLOduration=2.8462519459999998 podStartE2EDuration="2.846251946s" podCreationTimestamp="2026-01-22 14:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:31.841072721 +0000 UTC m=+1011.252182650" watchObservedRunningTime="2026-01-22 14:00:31.846251946 +0000 UTC m=+1011.257361875" Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.905626 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.924867 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-5sxsl"] Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.991614 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") pod \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.991762 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") pod \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " Jan 22 14:00:31 crc kubenswrapper[4769]: I0122 14:00:31.991816 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") pod \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\" (UID: \"9ccf209c-9829-41bd-af53-26ea82e6c9e0\") " Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.002930 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd" (OuterVolumeSpecName: "kube-api-access-6fpmd") pod "9ccf209c-9829-41bd-af53-26ea82e6c9e0" (UID: "9ccf209c-9829-41bd-af53-26ea82e6c9e0"). InnerVolumeSpecName "kube-api-access-6fpmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.047393 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ccf209c-9829-41bd-af53-26ea82e6c9e0" (UID: "9ccf209c-9829-41bd-af53-26ea82e6c9e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.048297 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config" (OuterVolumeSpecName: "config") pod "9ccf209c-9829-41bd-af53-26ea82e6c9e0" (UID: "9ccf209c-9829-41bd-af53-26ea82e6c9e0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.094472 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.094631 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fpmd\" (UniqueName: \"kubernetes.io/projected/9ccf209c-9829-41bd-af53-26ea82e6c9e0-kube-api-access-6fpmd\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.094649 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.094662 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ccf209c-9829-41bd-af53-26ea82e6c9e0-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:32 crc kubenswrapper[4769]: E0122 14:00:32.094775 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:32 crc kubenswrapper[4769]: E0122 14:00:32.094811 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:32 crc kubenswrapper[4769]: E0122 14:00:32.094862 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:36.094843071 +0000 UTC m=+1015.505953000 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.111263 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.840500 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-5f5mt" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.869236 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.874282 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-5f5mt"] Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.891257 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" path="/var/lib/kubelet/pods/9ccf209c-9829-41bd-af53-26ea82e6c9e0/volumes" Jan 22 14:00:32 crc kubenswrapper[4769]: I0122 14:00:32.892001 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d778948b-7654-48d1-8be2-edd924d70ad5" path="/var/lib/kubelet/pods/d778948b-7654-48d1-8be2-edd924d70ad5/volumes" Jan 22 14:00:33 crc kubenswrapper[4769]: I0122 14:00:33.171181 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 22 14:00:33 crc kubenswrapper[4769]: I0122 14:00:33.172015 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 22 14:00:33 crc kubenswrapper[4769]: I0122 14:00:33.300336 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 22 14:00:33 crc kubenswrapper[4769]: W0122 14:00:33.904028 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32d5b8f0_b7c1_4eeb_9b49_85b0240d28df.slice/crio-318d9796eafe0b5c1a57f86c20f2fd8829205dddd1fe1281e8300053bfa894aa WatchSource:0}: Error finding container 318d9796eafe0b5c1a57f86c20f2fd8829205dddd1fe1281e8300053bfa894aa: Status 404 returned error can't find the container with id 318d9796eafe0b5c1a57f86c20f2fd8829205dddd1fe1281e8300053bfa894aa Jan 22 14:00:33 crc kubenswrapper[4769]: I0122 14:00:33.943552 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.454514 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"] Jan 22 14:00:34 crc kubenswrapper[4769]: E0122 14:00:34.455461 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d778948b-7654-48d1-8be2-edd924d70ad5" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.455602 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d778948b-7654-48d1-8be2-edd924d70ad5" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: E0122 14:00:34.455716 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.455817 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: E0122 14:00:34.455985 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="dnsmasq-dns" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.456084 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="dnsmasq-dns" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.456354 4769 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d778948b-7654-48d1-8be2-edd924d70ad5" containerName="init" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.456756 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccf209c-9829-41bd-af53-26ea82e6c9e0" containerName="dnsmasq-dns" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.457678 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.460066 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.468844 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.520833 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mw8m7"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.522195 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.529329 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mw8m7"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.530783 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.530877 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.561772 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.561861 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.600434 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.662716 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.662874 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " 
pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.662934 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.662952 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.663941 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.683526 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") pod \"keystone-0c5f-account-create-update-dbzd4\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.764209 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.764267 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.767341 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.781250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") pod \"keystone-db-create-mw8m7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.806616 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.845236 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7q976"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.846464 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.850672 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.856573 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7q976"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.861398 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df","Type":"ContainerStarted","Data":"318d9796eafe0b5c1a57f86c20f2fd8829205dddd1fe1281e8300053bfa894aa"} Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.869204 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.870471 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.874391 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"] Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.875457 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.875658 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.875824 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.875899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.937645 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.977761 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") pod 
\"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.977903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.978007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.978145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.978966 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.981981 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:34 crc kubenswrapper[4769]: I0122 14:00:34.997253 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") pod \"placement-a329-account-create-update-5dtjs\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.080017 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.096091 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") pod \"placement-db-create-7q976\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " pod="openstack/placement-db-create-7q976" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.178398 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7q976" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.196255 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.805672 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mw8m7"] Jan 22 14:00:35 crc kubenswrapper[4769]: W0122 14:00:35.817870 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e5e1134_cb08_4676_b40b_5e05af038ec7.slice/crio-aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb WatchSource:0}: Error finding container aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb: Status 404 returned error can't find the container with id aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.872809 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mw8m7" event={"ID":"8e5e1134-cb08-4676-b40b-5e05af038ec7","Type":"ContainerStarted","Data":"aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb"} Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.878920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jmhxf" event={"ID":"f13b9a7b-6f5e-48fd-8d95-3beb851e9819","Type":"ContainerStarted","Data":"b3f6458924f57ce2e0a8e81626e83771a68f1ce1972979549e1eea8a213c5566"} Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.906180 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jmhxf" podStartSLOduration=2.095535902 podStartE2EDuration="7.906156633s" podCreationTimestamp="2026-01-22 14:00:28 +0000 UTC" firstStartedPulling="2026-01-22 14:00:29.658120649 +0000 UTC m=+1009.069230578" lastFinishedPulling="2026-01-22 14:00:35.46874138 +0000 UTC m=+1014.879851309" observedRunningTime="2026-01-22 14:00:35.899849447 +0000 UTC m=+1015.310959386" watchObservedRunningTime="2026-01-22 14:00:35.906156633 +0000 UTC m=+1015.317266562" Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.922777 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7q976"] Jan 22 14:00:35 crc kubenswrapper[4769]: W0122 14:00:35.928027 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod257149e5_e0f3_4721_9329_6c119ce91192.slice/crio-dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868 WatchSource:0}: Error finding container dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868: Status 404 returned error can't find the container with id dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868 Jan 22 14:00:35 crc kubenswrapper[4769]: I0122 14:00:35.988208 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"] Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.029045 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"] Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.101830 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: 
\"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:36 crc kubenswrapper[4769]: E0122 14:00:36.102167 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 22 14:00:36 crc kubenswrapper[4769]: E0122 14:00:36.103130 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 22 14:00:36 crc kubenswrapper[4769]: E0122 14:00:36.103205 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift podName:ce65dba3-22b9-482f-b3da-2f4705468ea4 nodeName:}" failed. No retries permitted until 2026-01-22 14:00:44.103179226 +0000 UTC m=+1023.514289155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift") pod "swift-storage-0" (UID: "ce65dba3-22b9-482f-b3da-2f4705468ea4") : configmap "swift-ring-files" not found Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.884003 4769 generic.go:334] "Generic (PLEG): container finished" podID="257149e5-e0f3-4721-9329-6c119ce91192" containerID="c074e42ca3ff188c7761b8f55de35192aed9fef36fdef20a8193ec2013468312" exitCode=0 Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.885760 4769 generic.go:334] "Generic (PLEG): container finished" podID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" containerID="41ccd1233986e7a4c125219fe7adea8a9635992e6e64e942e038414ae80cde80" exitCode=0 Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892217 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7q976" event={"ID":"257149e5-e0f3-4721-9329-6c119ce91192","Type":"ContainerDied","Data":"c074e42ca3ff188c7761b8f55de35192aed9fef36fdef20a8193ec2013468312"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892276 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7q976" event={"ID":"257149e5-e0f3-4721-9329-6c119ce91192","Type":"ContainerStarted","Data":"dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892293 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c5f-account-create-update-dbzd4" event={"ID":"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387","Type":"ContainerDied","Data":"41ccd1233986e7a4c125219fe7adea8a9635992e6e64e942e038414ae80cde80"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892353 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c5f-account-create-update-dbzd4" event={"ID":"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387","Type":"ContainerStarted","Data":"0806411dbac78855277ccd8aae65453370b85fb1ff508ae26217b4b63474dfa8"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892427 4769 generic.go:334] "Generic (PLEG): container finished" podID="8e5e1134-cb08-4676-b40b-5e05af038ec7" containerID="97b2836a40fe3718dc9876ac751e671d98460d0371e12f643bc7ac498b12c4d8" exitCode=0 Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.892510 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mw8m7" event={"ID":"8e5e1134-cb08-4676-b40b-5e05af038ec7","Type":"ContainerDied","Data":"97b2836a40fe3718dc9876ac751e671d98460d0371e12f643bc7ac498b12c4d8"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.900570 4769 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ovn-northd-0" event={"ID":"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df","Type":"ContainerStarted","Data":"eca21d7f6c008a3ab3bd6cd8c6674138b1a4d1736d28bbd57680bff23218d7c6"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.903942 4769 generic.go:334] "Generic (PLEG): container finished" podID="46ca4e3b-a376-4f54-88c0-75d4a912d489" containerID="76ee9e3f92bd4b52916160b7315f6f1bcae498478a919fab65490233e1c3a657" exitCode=0 Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.904009 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a329-account-create-update-5dtjs" event={"ID":"46ca4e3b-a376-4f54-88c0-75d4a912d489","Type":"ContainerDied","Data":"76ee9e3f92bd4b52916160b7315f6f1bcae498478a919fab65490233e1c3a657"} Jan 22 14:00:36 crc kubenswrapper[4769]: I0122 14:00:36.904031 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a329-account-create-update-5dtjs" event={"ID":"46ca4e3b-a376-4f54-88c0-75d4a912d489","Type":"ContainerStarted","Data":"5599ff455012fd2651b3f2b0c6e96e5330d4661239d31dd6a13c19c8874810a4"} Jan 22 14:00:37 crc kubenswrapper[4769]: I0122 14:00:37.917769 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"32d5b8f0-b7c1-4eeb-9b49-85b0240d28df","Type":"ContainerStarted","Data":"0568f0a3041dc122b247608db4fda9697a3ce9446474bc9931c7396300943a5b"} Jan 22 14:00:37 crc kubenswrapper[4769]: I0122 14:00:37.918070 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 22 14:00:37 crc kubenswrapper[4769]: I0122 14:00:37.947407 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.239277149 podStartE2EDuration="6.947374781s" podCreationTimestamp="2026-01-22 14:00:31 +0000 UTC" firstStartedPulling="2026-01-22 14:00:33.906774201 +0000 UTC m=+1013.317884130" lastFinishedPulling="2026-01-22 14:00:36.614871833 +0000 UTC m=+1016.025981762" observedRunningTime="2026-01-22 14:00:37.935513539 +0000 UTC m=+1017.346623478" watchObservedRunningTime="2026-01-22 14:00:37.947374781 +0000 UTC m=+1017.358484710" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.354934 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7q976" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.443114 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") pod \"257149e5-e0f3-4721-9329-6c119ce91192\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.443171 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") pod \"257149e5-e0f3-4721-9329-6c119ce91192\" (UID: \"257149e5-e0f3-4721-9329-6c119ce91192\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.443697 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "257149e5-e0f3-4721-9329-6c119ce91192" (UID: "257149e5-e0f3-4721-9329-6c119ce91192"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.449771 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9" (OuterVolumeSpecName: "kube-api-access-dwkh9") pod "257149e5-e0f3-4721-9329-6c119ce91192" (UID: "257149e5-e0f3-4721-9329-6c119ce91192"). InnerVolumeSpecName "kube-api-access-dwkh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.519261 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.529231 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.544724 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwkh9\" (UniqueName: \"kubernetes.io/projected/257149e5-e0f3-4721-9329-6c119ce91192-kube-api-access-dwkh9\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.544760 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/257149e5-e0f3-4721-9329-6c119ce91192-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.545419 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646086 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") pod \"46ca4e3b-a376-4f54-88c0-75d4a912d489\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646159 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") pod \"8e5e1134-cb08-4676-b40b-5e05af038ec7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646340 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") pod \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646362 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") pod \"46ca4e3b-a376-4f54-88c0-75d4a912d489\" (UID: \"46ca4e3b-a376-4f54-88c0-75d4a912d489\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646430 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") pod \"8e5e1134-cb08-4676-b40b-5e05af038ec7\" (UID: \"8e5e1134-cb08-4676-b40b-5e05af038ec7\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.646455 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") pod \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\" (UID: \"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387\") " Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.647006 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e5e1134-cb08-4676-b40b-5e05af038ec7" (UID: "8e5e1134-cb08-4676-b40b-5e05af038ec7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.647047 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46ca4e3b-a376-4f54-88c0-75d4a912d489" (UID: "46ca4e3b-a376-4f54-88c0-75d4a912d489"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.648887 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" (UID: "bced8c79-d4b4-42dc-ba19-a4ba1eeb4387"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.649735 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k" (OuterVolumeSpecName: "kube-api-access-gl85k") pod "46ca4e3b-a376-4f54-88c0-75d4a912d489" (UID: "46ca4e3b-a376-4f54-88c0-75d4a912d489"). InnerVolumeSpecName "kube-api-access-gl85k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.650446 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2" (OuterVolumeSpecName: "kube-api-access-bjlc2") pod "bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" (UID: "bced8c79-d4b4-42dc-ba19-a4ba1eeb4387"). InnerVolumeSpecName "kube-api-access-bjlc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.651571 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm" (OuterVolumeSpecName: "kube-api-access-lf8bm") pod "8e5e1134-cb08-4676-b40b-5e05af038ec7" (UID: "8e5e1134-cb08-4676-b40b-5e05af038ec7"). InnerVolumeSpecName "kube-api-access-lf8bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748177 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjlc2\" (UniqueName: \"kubernetes.io/projected/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-kube-api-access-bjlc2\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748214 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46ca4e3b-a376-4f54-88c0-75d4a912d489-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748225 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e5e1134-cb08-4676-b40b-5e05af038ec7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748236 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748250 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl85k\" (UniqueName: \"kubernetes.io/projected/46ca4e3b-a376-4f54-88c0-75d4a912d489-kube-api-access-gl85k\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.748263 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf8bm\" (UniqueName: \"kubernetes.io/projected/8e5e1134-cb08-4676-b40b-5e05af038ec7-kube-api-access-lf8bm\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.927684 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7q976" event={"ID":"257149e5-e0f3-4721-9329-6c119ce91192","Type":"ContainerDied","Data":"dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868"} Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.927726 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd1cb91be4ead0d1232743d3eb938c2081f310049bb6a53aa884f832a09a868" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.927734 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7q976" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.930350 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c5f-account-create-update-dbzd4" event={"ID":"bced8c79-d4b4-42dc-ba19-a4ba1eeb4387","Type":"ContainerDied","Data":"0806411dbac78855277ccd8aae65453370b85fb1ff508ae26217b4b63474dfa8"} Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.930500 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0806411dbac78855277ccd8aae65453370b85fb1ff508ae26217b4b63474dfa8" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.930664 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c5f-account-create-update-dbzd4" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.933097 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mw8m7" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.933096 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mw8m7" event={"ID":"8e5e1134-cb08-4676-b40b-5e05af038ec7","Type":"ContainerDied","Data":"aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb"} Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.933441 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeb0990d033e2bd5d75575962246340f82522b4363e6604461826d0c90f386cb" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.934849 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a329-account-create-update-5dtjs" Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.935237 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a329-account-create-update-5dtjs" event={"ID":"46ca4e3b-a376-4f54-88c0-75d4a912d489","Type":"ContainerDied","Data":"5599ff455012fd2651b3f2b0c6e96e5330d4661239d31dd6a13c19c8874810a4"} Jan 22 14:00:38 crc kubenswrapper[4769]: I0122 14:00:38.935288 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5599ff455012fd2651b3f2b0c6e96e5330d4661239d31dd6a13c19c8874810a4" Jan 22 14:00:39 crc kubenswrapper[4769]: I0122 14:00:39.784760 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:00:39 crc kubenswrapper[4769]: I0122 14:00:39.835277 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 14:00:39 crc kubenswrapper[4769]: I0122 14:00:39.835849 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns" containerID="cri-o://8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" gracePeriod=10 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.019955 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dxwjl"] Jan 22 14:00:40 crc kubenswrapper[4769]: E0122 14:00:40.020375 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257149e5-e0f3-4721-9329-6c119ce91192" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020392 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="257149e5-e0f3-4721-9329-6c119ce91192" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: E0122 14:00:40.020427 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e5e1134-cb08-4676-b40b-5e05af038ec7" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020436 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e5e1134-cb08-4676-b40b-5e05af038ec7" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: E0122 14:00:40.020457 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020465 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: E0122 14:00:40.020486 4769 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="46ca4e3b-a376-4f54-88c0-75d4a912d489" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020493 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ca4e3b-a376-4f54-88c0-75d4a912d489" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020707 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ca4e3b-a376-4f54-88c0-75d4a912d489" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020728 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="257149e5-e0f3-4721-9329-6c119ce91192" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020741 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" containerName="mariadb-account-create-update" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.020762 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e5e1134-cb08-4676-b40b-5e05af038ec7" containerName="mariadb-database-create" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.021403 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.027704 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dxwjl"] Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.129878 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"] Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.131494 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.133232 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.139528 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"] Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.189531 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.189782 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.291501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.291587 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.291680 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.291727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.292990 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") pod \"glance-db-create-dxwjl\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.309286 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") pod \"glance-db-create-dxwjl\" (UID: 
\"b909a789-674d-40ba-b332-700e27464966\") " pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.354336 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.392828 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.392921 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.393647 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.420481 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") pod \"glance-b906-account-create-update-rndmt\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.548178 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.670554 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dxwjl"] Jan 22 14:00:40 crc kubenswrapper[4769]: W0122 14:00:40.696928 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb909a789_674d_40ba_b332_700e27464966.slice/crio-0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944 WatchSource:0}: Error finding container 0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944: Status 404 returned error can't find the container with id 0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.962721 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.962977 4769 generic.go:334] "Generic (PLEG): container finished" podID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerID="8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" exitCode=0 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.963075 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerDied","Data":"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41"} Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.963608 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" event={"ID":"b51a7d68-4414-4157-ab31-b5ee67a26b87","Type":"ContainerDied","Data":"cec75e0348d51bc91245a011b2511f0acd3a0ca2ec0f078a6f1e2f875edd2e6f"} Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.963633 4769 scope.go:117] "RemoveContainer" containerID="8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.971873 4769 generic.go:334] "Generic (PLEG): container finished" podID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerID="02b31e2a239b0168026857e943798de5de7f95b04782c217474e99a5a431076d" exitCode=0 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.971947 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerDied","Data":"02b31e2a239b0168026857e943798de5de7f95b04782c217474e99a5a431076d"} Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.979594 4769 generic.go:334] "Generic (PLEG): container finished" podID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerID="cd37417a78b080b1ccc1b5edbe869aca8460373ef9a4d35cbfcb0a8060072f8f" exitCode=0 Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.979681 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerDied","Data":"cd37417a78b080b1ccc1b5edbe869aca8460373ef9a4d35cbfcb0a8060072f8f"} Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.997309 4769 scope.go:117] "RemoveContainer" containerID="ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895" Jan 22 14:00:40 crc kubenswrapper[4769]: I0122 14:00:40.998221 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dxwjl" event={"ID":"b909a789-674d-40ba-b332-700e27464966","Type":"ContainerStarted","Data":"0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944"} Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.043436 4769 scope.go:117] "RemoveContainer" containerID="8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" Jan 22 14:00:41 crc kubenswrapper[4769]: E0122 14:00:41.044415 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41\": container with ID starting with 8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41 not found: ID does not exist" containerID="8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.044453 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41"} err="failed to get container status \"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41\": rpc error: code = NotFound desc = could not find container \"8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41\": container with ID starting with 8a49eca2021a2295ffe88f33f58659f6911edf81dd9a4c1261422569e89aab41 not found: ID does not exist" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.044476 4769 scope.go:117] "RemoveContainer" containerID="ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895" Jan 22 14:00:41 crc kubenswrapper[4769]: E0122 14:00:41.045704 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895\": container with ID starting with ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895 not found: ID does not exist" containerID="ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.045759 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895"} err="failed to get container status \"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895\": rpc error: code = NotFound desc = could not find container \"ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895\": container with ID starting with ee9898fa7e974bc9f074358f6748677719c62c630a7913b53ab6b56932e4d895 not found: ID does not exist" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.065100 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-dxwjl" podStartSLOduration=1.065081956 podStartE2EDuration="1.065081956s" podCreationTimestamp="2026-01-22 14:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:41.059531881 +0000 UTC m=+1020.470641820" watchObservedRunningTime="2026-01-22 14:00:41.065081956 +0000 UTC m=+1020.476191885" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.109984 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") pod \"b51a7d68-4414-4157-ab31-b5ee67a26b87\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.110112 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") pod \"b51a7d68-4414-4157-ab31-b5ee67a26b87\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.110865 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") pod \"b51a7d68-4414-4157-ab31-b5ee67a26b87\" (UID: \"b51a7d68-4414-4157-ab31-b5ee67a26b87\") " Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.115445 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6" 
(OuterVolumeSpecName: "kube-api-access-rjtk6") pod "b51a7d68-4414-4157-ab31-b5ee67a26b87" (UID: "b51a7d68-4414-4157-ab31-b5ee67a26b87"). InnerVolumeSpecName "kube-api-access-rjtk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.160257 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config" (OuterVolumeSpecName: "config") pod "b51a7d68-4414-4157-ab31-b5ee67a26b87" (UID: "b51a7d68-4414-4157-ab31-b5ee67a26b87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.162359 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b51a7d68-4414-4157-ab31-b5ee67a26b87" (UID: "b51a7d68-4414-4157-ab31-b5ee67a26b87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.185753 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"] Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.195223 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.213866 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.213900 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b51a7d68-4414-4157-ab31-b5ee67a26b87-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.213916 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjtk6\" (UniqueName: \"kubernetes.io/projected/b51a7d68-4414-4157-ab31-b5ee67a26b87-kube-api-access-rjtk6\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.766883 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wfphv"] Jan 22 14:00:41 crc kubenswrapper[4769]: E0122 14:00:41.767290 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="init" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.767311 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="init" Jan 22 14:00:41 crc kubenswrapper[4769]: E0122 14:00:41.767329 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.767338 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.767526 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.768219 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.776248 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.779124 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wfphv"] Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.825694 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.826080 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.927639 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.927740 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.928846 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:41 crc kubenswrapper[4769]: I0122 14:00:41.949442 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") pod \"root-account-create-update-wfphv\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.006221 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerStarted","Data":"49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0"} Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.006581 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.008274 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerStarted","Data":"401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce"} Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.008550 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.009634 4769 generic.go:334] "Generic (PLEG): container finished" podID="b909a789-674d-40ba-b332-700e27464966" containerID="fb2e3c339083927502fb6cea262472f4288b04764f08eec3cbd1e7e2b61cc67d" exitCode=0 Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.009676 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dxwjl" event={"ID":"b909a789-674d-40ba-b332-700e27464966","Type":"ContainerDied","Data":"fb2e3c339083927502fb6cea262472f4288b04764f08eec3cbd1e7e2b61cc67d"} Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.010979 4769 generic.go:334] "Generic (PLEG): container finished" podID="73fd3df5-6e83-4893-9368-66c1ba35155a" containerID="8c802b2b696d681ed9980b953b8105bed5cefd906bb042dcf0b8c4943c91185b" exitCode=0 Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.011050 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b906-account-create-update-rndmt" event={"ID":"73fd3df5-6e83-4893-9368-66c1ba35155a","Type":"ContainerDied","Data":"8c802b2b696d681ed9980b953b8105bed5cefd906bb042dcf0b8c4943c91185b"} Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.011072 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.011086 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b906-account-create-update-rndmt" event={"ID":"73fd3df5-6e83-4893-9368-66c1ba35155a","Type":"ContainerStarted","Data":"935a6bc520b697a1a8e7658924bf97f8f46c6f788a0b1b218816dcc36fbdabae"} Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.037413 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.818238729 podStartE2EDuration="52.037392355s" podCreationTimestamp="2026-01-22 13:59:50 +0000 UTC" firstStartedPulling="2026-01-22 13:59:55.746910833 +0000 UTC m=+975.158020772" lastFinishedPulling="2026-01-22 14:00:06.966064469 +0000 UTC m=+986.377174398" observedRunningTime="2026-01-22 14:00:42.037275252 +0000 UTC m=+1021.448385201" watchObservedRunningTime="2026-01-22 14:00:42.037392355 +0000 UTC m=+1021.448502284" Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.057309 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.443146339 podStartE2EDuration="52.057285376s" podCreationTimestamp="2026-01-22 13:59:50 +0000 UTC" firstStartedPulling="2026-01-22 14:00:05.356158093 +0000 UTC m=+984.767268032" lastFinishedPulling="2026-01-22 14:00:06.97029713 +0000 UTC m=+986.381407069" observedRunningTime="2026-01-22 14:00:42.056714292 +0000 UTC m=+1021.467824251" watchObservedRunningTime="2026-01-22 14:00:42.057285376 +0000 UTC m=+1021.468395305" Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.106684 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.115683 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.129191 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-qvqgs"] Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.502620 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wfphv"] Jan 22 14:00:42 crc kubenswrapper[4769]: I0122 14:00:42.917619 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" path="/var/lib/kubelet/pods/b51a7d68-4414-4157-ab31-b5ee67a26b87/volumes" Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.024209 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wfphv" event={"ID":"4195c73b-d10a-4b39-ad10-1da9502af686","Type":"ContainerStarted","Data":"ae72a3cad378713d6148c709f4937c708ece4459bfb2c249eb2d7b58d0c80b04"} Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.024249 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wfphv" event={"ID":"4195c73b-d10a-4b39-ad10-1da9502af686","Type":"ContainerStarted","Data":"828388dbceea74a9e45af3dfa3b37a9d86c0474f3d9ccae8f2a66ad1959e6c99"} Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.056097 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-wfphv" podStartSLOduration=2.056071478 podStartE2EDuration="2.056071478s" podCreationTimestamp="2026-01-22 14:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:00:43.047389491 +0000 UTC m=+1022.458499430" watchObservedRunningTime="2026-01-22 14:00:43.056071478 +0000 UTC m=+1022.467181407" Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.820744 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:43 crc kubenswrapper[4769]: I0122 14:00:43.826874 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.004814 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") pod \"b909a789-674d-40ba-b332-700e27464966\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.004918 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") pod \"73fd3df5-6e83-4893-9368-66c1ba35155a\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.004992 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") pod \"b909a789-674d-40ba-b332-700e27464966\" (UID: \"b909a789-674d-40ba-b332-700e27464966\") " Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.005068 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") pod \"73fd3df5-6e83-4893-9368-66c1ba35155a\" (UID: \"73fd3df5-6e83-4893-9368-66c1ba35155a\") " Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.005698 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73fd3df5-6e83-4893-9368-66c1ba35155a" (UID: "73fd3df5-6e83-4893-9368-66c1ba35155a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.006121 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73fd3df5-6e83-4893-9368-66c1ba35155a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.006178 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b909a789-674d-40ba-b332-700e27464966" (UID: "b909a789-674d-40ba-b332-700e27464966"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.010517 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75" (OuterVolumeSpecName: "kube-api-access-n5c75") pod "73fd3df5-6e83-4893-9368-66c1ba35155a" (UID: "73fd3df5-6e83-4893-9368-66c1ba35155a"). InnerVolumeSpecName "kube-api-access-n5c75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.010893 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb" (OuterVolumeSpecName: "kube-api-access-th6kb") pod "b909a789-674d-40ba-b332-700e27464966" (UID: "b909a789-674d-40ba-b332-700e27464966"). InnerVolumeSpecName "kube-api-access-th6kb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.034101 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dxwjl" event={"ID":"b909a789-674d-40ba-b332-700e27464966","Type":"ContainerDied","Data":"0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944"} Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.034145 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a081e86b015573dfa11d971cf861b68ff7a7bd2a89aa9d93058fbab522b6944" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.034117 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dxwjl" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.036294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b906-account-create-update-rndmt" event={"ID":"73fd3df5-6e83-4893-9368-66c1ba35155a","Type":"ContainerDied","Data":"935a6bc520b697a1a8e7658924bf97f8f46c6f788a0b1b218816dcc36fbdabae"} Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.036331 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="935a6bc520b697a1a8e7658924bf97f8f46c6f788a0b1b218816dcc36fbdabae" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.036384 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b906-account-create-update-rndmt" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.047304 4769 generic.go:334] "Generic (PLEG): container finished" podID="4195c73b-d10a-4b39-ad10-1da9502af686" containerID="ae72a3cad378713d6148c709f4937c708ece4459bfb2c249eb2d7b58d0c80b04" exitCode=0 Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.047343 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wfphv" event={"ID":"4195c73b-d10a-4b39-ad10-1da9502af686","Type":"ContainerDied","Data":"ae72a3cad378713d6148c709f4937c708ece4459bfb2c249eb2d7b58d0c80b04"} Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.107589 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: \"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.107733 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b909a789-674d-40ba-b332-700e27464966-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.107751 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5c75\" (UniqueName: \"kubernetes.io/projected/73fd3df5-6e83-4893-9368-66c1ba35155a-kube-api-access-n5c75\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.107766 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th6kb\" (UniqueName: \"kubernetes.io/projected/b909a789-674d-40ba-b332-700e27464966-kube-api-access-th6kb\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.112117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ce65dba3-22b9-482f-b3da-2f4705468ea4-etc-swift\") pod \"swift-storage-0\" (UID: 
\"ce65dba3-22b9-482f-b3da-2f4705468ea4\") " pod="openstack/swift-storage-0" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.196125 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 22 14:00:44 crc kubenswrapper[4769]: I0122 14:00:44.753651 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 22 14:00:44 crc kubenswrapper[4769]: W0122 14:00:44.760842 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce65dba3_22b9_482f_b3da_2f4705468ea4.slice/crio-d0571332421a1f6f93bec883afeb30fa53efe7aa65d653ea5843811e401aafa7 WatchSource:0}: Error finding container d0571332421a1f6f93bec883afeb30fa53efe7aa65d653ea5843811e401aafa7: Status 404 returned error can't find the container with id d0571332421a1f6f93bec883afeb30fa53efe7aa65d653ea5843811e401aafa7 Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.057762 4769 generic.go:334] "Generic (PLEG): container finished" podID="f13b9a7b-6f5e-48fd-8d95-3beb851e9819" containerID="b3f6458924f57ce2e0a8e81626e83771a68f1ce1972979549e1eea8a213c5566" exitCode=0 Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.057828 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jmhxf" event={"ID":"f13b9a7b-6f5e-48fd-8d95-3beb851e9819","Type":"ContainerDied","Data":"b3f6458924f57ce2e0a8e81626e83771a68f1ce1972979549e1eea8a213c5566"} Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.058967 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"d0571332421a1f6f93bec883afeb30fa53efe7aa65d653ea5843811e401aafa7"} Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.253027 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ljbrk" podUID="db7ce269-d7ec-4db1-aab3-b22da5d56c6e" containerName="ovn-controller" probeResult="failure" output=< Jan 22 14:00:45 crc kubenswrapper[4769]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 14:00:45 crc kubenswrapper[4769]: > Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.257582 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335093 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-t9sxw"] Jan 22 14:00:45 crc kubenswrapper[4769]: E0122 14:00:45.335422 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b909a789-674d-40ba-b332-700e27464966" containerName="mariadb-database-create" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335438 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b909a789-674d-40ba-b332-700e27464966" containerName="mariadb-database-create" Jan 22 14:00:45 crc kubenswrapper[4769]: E0122 14:00:45.335447 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73fd3df5-6e83-4893-9368-66c1ba35155a" containerName="mariadb-account-create-update" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335453 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="73fd3df5-6e83-4893-9368-66c1ba35155a" containerName="mariadb-account-create-update" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335619 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="73fd3df5-6e83-4893-9368-66c1ba35155a" containerName="mariadb-account-create-update" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.335663 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b909a789-674d-40ba-b332-700e27464966" containerName="mariadb-database-create" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.336292 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.338186 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.338268 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-khhk4" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.362996 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-t9sxw"] Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.418284 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.528665 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") pod \"4195c73b-d10a-4b39-ad10-1da9502af686\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.528772 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") pod \"4195c73b-d10a-4b39-ad10-1da9502af686\" (UID: \"4195c73b-d10a-4b39-ad10-1da9502af686\") " Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529067 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529111 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529578 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529599 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4195c73b-d10a-4b39-ad10-1da9502af686" (UID: "4195c73b-d10a-4b39-ad10-1da9502af686"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529777 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.529981 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4195c73b-d10a-4b39-ad10-1da9502af686-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.543005 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4" (OuterVolumeSpecName: "kube-api-access-g7ks4") pod "4195c73b-d10a-4b39-ad10-1da9502af686" (UID: "4195c73b-d10a-4b39-ad10-1da9502af686"). InnerVolumeSpecName "kube-api-access-g7ks4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.631372 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.631806 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.631942 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.632063 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.632350 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7ks4\" (UniqueName: \"kubernetes.io/projected/4195c73b-d10a-4b39-ad10-1da9502af686-kube-api-access-g7ks4\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.636398 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.638655 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.648166 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.654221 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") pod \"glance-db-sync-t9sxw\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") " pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.716076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t9sxw" Jan 22 14:00:45 crc kubenswrapper[4769]: I0122 14:00:45.853348 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-qvqgs" podUID="b51a7d68-4414-4157-ab31-b5ee67a26b87" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: i/o timeout" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.072437 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wfphv" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.072493 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wfphv" event={"ID":"4195c73b-d10a-4b39-ad10-1da9502af686","Type":"ContainerDied","Data":"828388dbceea74a9e45af3dfa3b37a9d86c0474f3d9ccae8f2a66ad1959e6c99"} Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.072527 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="828388dbceea74a9e45af3dfa3b37a9d86c0474f3d9ccae8f2a66ad1959e6c99" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.374062 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-t9sxw"] Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.386015 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547451 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547728 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547772 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547843 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547872 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547934 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.547977 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") pod \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\" (UID: \"f13b9a7b-6f5e-48fd-8d95-3beb851e9819\") " Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.548823 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.548932 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.553862 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx" (OuterVolumeSpecName: "kube-api-access-tzwjx") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "kube-api-access-tzwjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.559484 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.568706 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts" (OuterVolumeSpecName: "scripts") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.570517 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.578898 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.581443 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f13b9a7b-6f5e-48fd-8d95-3beb851e9819" (UID: "f13b9a7b-6f5e-48fd-8d95-3beb851e9819"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:00:46 crc kubenswrapper[4769]: W0122 14:00:46.610155 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4b4ca8a_8b9e_48d2_9208_fecb2bc9a299.slice/crio-a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8 WatchSource:0}: Error finding container a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8: Status 404 returned error can't find the container with id a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8 Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.649904 4769 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650396 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzwjx\" (UniqueName: \"kubernetes.io/projected/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-kube-api-access-tzwjx\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650431 4769 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650442 4769 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650451 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650461 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:46 crc kubenswrapper[4769]: I0122 14:00:46.650469 4769 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f13b9a7b-6f5e-48fd-8d95-3beb851e9819-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.094508 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"462869d558d9f49e29a5a34141e78fcd0c96ffa63f8f76014c23c4c843c4850e"} Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.097032 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t9sxw" event={"ID":"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299","Type":"ContainerStarted","Data":"a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8"} Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.098197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jmhxf" event={"ID":"f13b9a7b-6f5e-48fd-8d95-3beb851e9819","Type":"ContainerDied","Data":"895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0"} Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.098218 4769 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="895da75304cec8858b8075e7a5265e609df985988010f8eef12f9027143cb2a0" Jan 22 14:00:47 crc kubenswrapper[4769]: I0122 14:00:47.098269 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jmhxf" Jan 22 14:00:48 crc kubenswrapper[4769]: I0122 14:00:48.202783 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wfphv"] Jan 22 14:00:48 crc kubenswrapper[4769]: I0122 14:00:48.208403 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wfphv"] Jan 22 14:00:48 crc kubenswrapper[4769]: I0122 14:00:48.893378 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4195c73b-d10a-4b39-ad10-1da9502af686" path="/var/lib/kubelet/pods/4195c73b-d10a-4b39-ad10-1da9502af686/volumes" Jan 22 14:00:49 crc kubenswrapper[4769]: I0122 14:00:49.117959 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"cbed28ee7193a910f0117dd368d60a4c91d6b9d9d61d79dc2ecdcbfffee73505"} Jan 22 14:00:49 crc kubenswrapper[4769]: I0122 14:00:49.118013 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"a991deb8e0631fde737a00e78149f3287c197258880e7a783e51be05b94e29ed"} Jan 22 14:00:49 crc kubenswrapper[4769]: I0122 14:00:49.118025 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"080f07f1f2a1ecfffedaf2446036b625e39e4b70c7f389faf7370852330f240e"} Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.261106 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ljbrk" podUID="db7ce269-d7ec-4db1-aab3-b22da5d56c6e" containerName="ovn-controller" probeResult="failure" output=< Jan 22 14:00:50 crc kubenswrapper[4769]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 22 14:00:50 crc kubenswrapper[4769]: > Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.275885 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-57w6l" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.466664 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"] Jan 22 14:00:50 crc kubenswrapper[4769]: E0122 14:00:50.467054 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13b9a7b-6f5e-48fd-8d95-3beb851e9819" containerName="swift-ring-rebalance" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.467074 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13b9a7b-6f5e-48fd-8d95-3beb851e9819" containerName="swift-ring-rebalance" Jan 22 14:00:50 crc kubenswrapper[4769]: E0122 14:00:50.467089 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4195c73b-d10a-4b39-ad10-1da9502af686" containerName="mariadb-account-create-update" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.467097 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4195c73b-d10a-4b39-ad10-1da9502af686" containerName="mariadb-account-create-update" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.467242 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f13b9a7b-6f5e-48fd-8d95-3beb851e9819" 
containerName="swift-ring-rebalance" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.467262 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4195c73b-d10a-4b39-ad10-1da9502af686" containerName="mariadb-account-create-update" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.474435 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.478140 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.482218 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"] Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.609832 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610244 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610397 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610501 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610570 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.610603 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712298 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") pod 
\"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712407 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712462 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712500 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712530 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712550 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712556 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712593 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.712632 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.713409 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") pod 
\"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.714895 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.743915 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") pod \"ovn-controller-ljbrk-config-7j6lk\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:50 crc kubenswrapper[4769]: I0122 14:00:50.791558 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:00:51 crc kubenswrapper[4769]: I0122 14:00:51.377696 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"] Jan 22 14:00:51 crc kubenswrapper[4769]: I0122 14:00:51.683550 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 22 14:00:52 crc kubenswrapper[4769]: I0122 14:00:52.066952 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 22 14:00:52 crc kubenswrapper[4769]: I0122 14:00:52.226057 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"e10c907382d8831a559c4c4d89a46a697c4000033721f39c48f631e0c0364cec"} Jan 22 14:00:52 crc kubenswrapper[4769]: I0122 14:00:52.233565 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk-config-7j6lk" event={"ID":"21361871-15c6-44f4-ac22-d7765d9633a0","Type":"ContainerStarted","Data":"b5d78f1ed84da206017ea26712a6fdf2d29db5a0dadb912232656c12c0e54e3b"} Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.221525 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-trlj5"] Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.222934 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.229417 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.231185 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-trlj5"] Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.360918 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.361164 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.462863 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.462984 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.463759 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.484207 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") pod \"root-account-create-update-trlj5\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:53 crc kubenswrapper[4769]: I0122 14:00:53.576868 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-trlj5" Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.108906 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-trlj5"] Jan 22 14:00:54 crc kubenswrapper[4769]: W0122 14:00:54.121189 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4521e7ce_1245_4a18_9179_83a2b288e227.slice/crio-3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1 WatchSource:0}: Error finding container 3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1: Status 404 returned error can't find the container with id 3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1 Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.257334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"9a1f50309aeee1040bdd92b3e5ea00d03944cbae5a44744e87efcb265d3a7b37"} Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.257651 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"3737f1d391e2fecf42f301060c8c5c1da63f3f2e5806e23f7d670983f57e9dec"} Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.258962 4769 generic.go:334] "Generic (PLEG): container finished" podID="21361871-15c6-44f4-ac22-d7765d9633a0" containerID="1df5bb57a2b37a726deb06ee2a4311afcd91a86d912ad8365dad00a8584aad2b" exitCode=0 Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.258998 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk-config-7j6lk" event={"ID":"21361871-15c6-44f4-ac22-d7765d9633a0","Type":"ContainerDied","Data":"1df5bb57a2b37a726deb06ee2a4311afcd91a86d912ad8365dad00a8584aad2b"} Jan 22 14:00:54 crc kubenswrapper[4769]: I0122 14:00:54.260393 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trlj5" event={"ID":"4521e7ce-1245-4a18-9179-83a2b288e227","Type":"ContainerStarted","Data":"3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1"} Jan 22 14:00:55 crc kubenswrapper[4769]: I0122 14:00:55.262565 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ljbrk" Jan 22 14:00:55 crc kubenswrapper[4769]: I0122 14:00:55.270201 4769 generic.go:334] "Generic (PLEG): container finished" podID="4521e7ce-1245-4a18-9179-83a2b288e227" containerID="09178c7f0f25de3bb2d0040621da54e6d9636a7e539ca3291149727833705d8f" exitCode=0 Jan 22 14:00:55 crc kubenswrapper[4769]: I0122 14:00:55.270256 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trlj5" event={"ID":"4521e7ce-1245-4a18-9179-83a2b288e227","Type":"ContainerDied","Data":"09178c7f0f25de3bb2d0040621da54e6d9636a7e539ca3291149727833705d8f"} Jan 22 14:00:55 crc kubenswrapper[4769]: I0122 14:00:55.276528 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"b88db4863753c30af904596d458016893c1fd2790bc4eea038c5fecef9c97bd9"} Jan 22 14:01:01 crc kubenswrapper[4769]: I0122 14:01:01.684276 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 
14:01:02.006423 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.011580 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trlj5" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.055398 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7r9tp"] Jan 22 14:01:02 crc kubenswrapper[4769]: E0122 14:01:02.055866 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4521e7ce-1245-4a18-9179-83a2b288e227" containerName="mariadb-account-create-update" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.055883 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4521e7ce-1245-4a18-9179-83a2b288e227" containerName="mariadb-account-create-update" Jan 22 14:01:02 crc kubenswrapper[4769]: E0122 14:01:02.055905 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21361871-15c6-44f4-ac22-d7765d9633a0" containerName="ovn-config" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.055911 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="21361871-15c6-44f4-ac22-d7765d9633a0" containerName="ovn-config" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.056076 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="21361871-15c6-44f4-ac22-d7765d9633a0" containerName="ovn-config" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.056107 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4521e7ce-1245-4a18-9179-83a2b288e227" containerName="mariadb-account-create-update" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.056602 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.073391 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7r9tp"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.076916 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.122139 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.125285 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.128565 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.137147 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173099 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173161 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173214 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") pod \"4521e7ce-1245-4a18-9179-83a2b288e227\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173359 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173408 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173433 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173472 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") pod \"4521e7ce-1245-4a18-9179-83a2b288e227\" (UID: \"4521e7ce-1245-4a18-9179-83a2b288e227\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173513 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") pod \"21361871-15c6-44f4-ac22-d7765d9633a0\" (UID: \"21361871-15c6-44f4-ac22-d7765d9633a0\") " Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173843 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run" (OuterVolumeSpecName: "var-run") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: 
"21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.173939 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldw6j\" (UniqueName: \"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174092 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174228 4769 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174468 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174494 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174699 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4521e7ce-1245-4a18-9179-83a2b288e227" (UID: "4521e7ce-1245-4a18-9179-83a2b288e227"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.174746 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts" (OuterVolumeSpecName: "scripts") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.175938 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.185024 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz" (OuterVolumeSpecName: "kube-api-access-66zqz") pod "21361871-15c6-44f4-ac22-d7765d9633a0" (UID: "21361871-15c6-44f4-ac22-d7765d9633a0"). InnerVolumeSpecName "kube-api-access-66zqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.202247 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj" (OuterVolumeSpecName: "kube-api-access-8qhdj") pod "4521e7ce-1245-4a18-9179-83a2b288e227" (UID: "4521e7ce-1245-4a18-9179-83a2b288e227"). InnerVolumeSpecName "kube-api-access-8qhdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.215244 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-5nx2t"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.217025 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.226081 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.228614 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.231169 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.241831 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5nx2t"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.253614 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.277631 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278063 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5gx\" (UniqueName: \"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278104 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278139 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldw6j\" (UniqueName: \"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278276 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278315 4769 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278327 4769 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/21361871-15c6-44f4-ac22-d7765d9633a0-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278336 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qhdj\" (UniqueName: \"kubernetes.io/projected/4521e7ce-1245-4a18-9179-83a2b288e227-kube-api-access-8qhdj\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278345 4769 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278354 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66zqz\" (UniqueName: \"kubernetes.io/projected/21361871-15c6-44f4-ac22-d7765d9633a0-kube-api-access-66zqz\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278363 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21361871-15c6-44f4-ac22-d7765d9633a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.278372 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4521e7ce-1245-4a18-9179-83a2b288e227-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.279458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.311267 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldw6j\" (UniqueName: 
\"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") pod \"cinder-db-create-7r9tp\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.342118 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ljbrk-config-7j6lk" event={"ID":"21361871-15c6-44f4-ac22-d7765d9633a0","Type":"ContainerDied","Data":"b5d78f1ed84da206017ea26712a6fdf2d29db5a0dadb912232656c12c0e54e3b"} Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.342155 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5d78f1ed84da206017ea26712a6fdf2d29db5a0dadb912232656c12c0e54e3b" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.342208 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ljbrk-config-7j6lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.351222 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-trlj5" event={"ID":"4521e7ce-1245-4a18-9179-83a2b288e227","Type":"ContainerDied","Data":"3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1"} Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.351262 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ffc29455d0ce4e9188b4d45fbc23f46eb348eb47e7206213b55b1e9587c3ca1" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.351863 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-trlj5" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.374095 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380004 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl5gx\" (UniqueName: \"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380049 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380097 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380178 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.380213 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.381185 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.381762 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.402511 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl5gx\" (UniqueName: 
\"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") pod \"cinder-8372-account-create-update-lq4fn\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.403116 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") pod \"barbican-db-create-5nx2t\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.430072 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-r7c9w"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.431123 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.435692 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.435931 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nrw5d" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.436053 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.436416 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.439260 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r7c9w"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.444752 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481126 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481194 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481225 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481278 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.481326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.482201 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.506875 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.507653 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") pod \"barbican-8bb3-account-create-update-x6jhs\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.508005 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.510140 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.521778 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585445 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585554 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585595 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.585718 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.591087 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.591653 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.605187 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") pod \"keystone-db-sync-r7c9w\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " 
pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.617097 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.624870 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-892lk"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.626309 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.632628 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-892lk"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.637935 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.686800 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.686888 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.686972 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.687037 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.688093 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.711189 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") pod \"neutron-24cb-account-create-update-rtdf4\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.757594 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.789027 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.789158 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.790250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.807741 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") pod \"neutron-db-create-892lk\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " pod="openstack/neutron-db-create-892lk" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.832351 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.949418 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7r9tp"] Jan 22 14:01:02 crc kubenswrapper[4769]: I0122 14:01:02.969089 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-892lk" Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.127632 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.169534 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.195353 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ljbrk-config-7j6lk"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.222745 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.266758 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-5nx2t"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.276640 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"] Jan 22 14:01:03 crc kubenswrapper[4769]: W0122 14:01:03.311594 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb68cb3e_c079_4e87_ae9b_be93a2b8b80e.slice/crio-7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8 WatchSource:0}: Error finding container 7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8: Status 404 returned error can't find the container with id 7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8 Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.373617 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r7c9w"] Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.399126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8372-account-create-update-lq4fn" event={"ID":"51e2f7fd-cd2e-4a84-b62a-27915d32469c","Type":"ContainerStarted","Data":"d679f95f173487e55b7459bd3fc7f4540a679004c865d7f6767595d3d679ed77"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.405199 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nx2t" event={"ID":"3d72603e-a10a-4490-8298-67db64d087fc","Type":"ContainerStarted","Data":"1398047490e7ad774844fcdd21d36eeaa7ef1a8b0e137e6b1405961ab26a58b1"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.411764 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24cb-account-create-update-rtdf4" event={"ID":"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e","Type":"ContainerStarted","Data":"7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.424425 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r9tp" event={"ID":"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0","Type":"ContainerStarted","Data":"53daaafba2df179129fbaee7564a7dbb0810bedc12982841f6922ce3e0b0c0bc"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.428533 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb3-account-create-update-x6jhs" event={"ID":"ec90402f-c994-4710-b82f-5c8cc3f12fdf","Type":"ContainerStarted","Data":"2526f6d6abe9ddf1def4e75e6755fa98fa5b8f9ceae123095b211a7facde003a"} Jan 22 14:01:03 crc kubenswrapper[4769]: I0122 14:01:03.496118 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-892lk"] Jan 
22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.438741 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nx2t" event={"ID":"3d72603e-a10a-4490-8298-67db64d087fc","Type":"ContainerStarted","Data":"52648bb4b661a8c6c50f29dcbb2e628521c76a98f4664eeeaa26623f333c78ee"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.441245 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7c9w" event={"ID":"275c0c66-cbd1-4469-81f6-c33a1eab0ed6","Type":"ContainerStarted","Data":"92baa55a546dc1edc3b0176ea083063e122cca726bb4af4e4e8f8b15d0ee43c7"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.442907 4769 generic.go:334] "Generic (PLEG): container finished" podID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" containerID="a23fe7e1f609804bd01eaf3b67aa868ecc07d3bf005fc4cf04bf270bb0eb13a4" exitCode=0 Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.442998 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r9tp" event={"ID":"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0","Type":"ContainerDied","Data":"a23fe7e1f609804bd01eaf3b67aa868ecc07d3bf005fc4cf04bf270bb0eb13a4"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.446919 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb3-account-create-update-x6jhs" event={"ID":"ec90402f-c994-4710-b82f-5c8cc3f12fdf","Type":"ContainerStarted","Data":"afe20a822b4f3e3d56773006d4aeb9478417b77dbf27f9940cbd13b2576b2dc2"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.450561 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t9sxw" event={"ID":"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299","Type":"ContainerStarted","Data":"61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.456969 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8372-account-create-update-lq4fn" event={"ID":"51e2f7fd-cd2e-4a84-b62a-27915d32469c","Type":"ContainerStarted","Data":"21355f679d3807ef130aaa327e0801fb4ef81abe61c9581a47edf5ff6be44534"} Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.464175 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-5nx2t" podStartSLOduration=2.464156329 podStartE2EDuration="2.464156329s" podCreationTimestamp="2026-01-22 14:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:04.457653293 +0000 UTC m=+1043.868763212" watchObservedRunningTime="2026-01-22 14:01:04.464156329 +0000 UTC m=+1043.875266258" Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.479666 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-t9sxw" podStartSLOduration=3.879451478 podStartE2EDuration="19.479630158s" podCreationTimestamp="2026-01-22 14:00:45 +0000 UTC" firstStartedPulling="2026-01-22 14:00:46.612382847 +0000 UTC m=+1026.023492786" lastFinishedPulling="2026-01-22 14:01:02.212561537 +0000 UTC m=+1041.623671466" observedRunningTime="2026-01-22 14:01:04.476039551 +0000 UTC m=+1043.887149490" watchObservedRunningTime="2026-01-22 14:01:04.479630158 +0000 UTC m=+1043.890740087" Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.594228 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8bb3-account-create-update-x6jhs" podStartSLOduration=2.594208715 
podStartE2EDuration="2.594208715s" podCreationTimestamp="2026-01-22 14:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:04.566082052 +0000 UTC m=+1043.977192001" watchObservedRunningTime="2026-01-22 14:01:04.594208715 +0000 UTC m=+1044.005318644" Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.595051 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-8372-account-create-update-lq4fn" podStartSLOduration=2.595043568 podStartE2EDuration="2.595043568s" podCreationTimestamp="2026-01-22 14:01:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:04.594983637 +0000 UTC m=+1044.006093566" watchObservedRunningTime="2026-01-22 14:01:04.595043568 +0000 UTC m=+1044.006153497" Jan 22 14:01:04 crc kubenswrapper[4769]: I0122 14:01:04.939869 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21361871-15c6-44f4-ac22-d7765d9633a0" path="/var/lib/kubelet/pods/21361871-15c6-44f4-ac22-d7765d9633a0/volumes" Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.481290 4769 generic.go:334] "Generic (PLEG): container finished" podID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" containerID="afe20a822b4f3e3d56773006d4aeb9478417b77dbf27f9940cbd13b2576b2dc2" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.482745 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb3-account-create-update-x6jhs" event={"ID":"ec90402f-c994-4710-b82f-5c8cc3f12fdf","Type":"ContainerDied","Data":"afe20a822b4f3e3d56773006d4aeb9478417b77dbf27f9940cbd13b2576b2dc2"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.497715 4769 generic.go:334] "Generic (PLEG): container finished" podID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" containerID="21355f679d3807ef130aaa327e0801fb4ef81abe61c9581a47edf5ff6be44534" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.497869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8372-account-create-update-lq4fn" event={"ID":"51e2f7fd-cd2e-4a84-b62a-27915d32469c","Type":"ContainerDied","Data":"21355f679d3807ef130aaa327e0801fb4ef81abe61c9581a47edf5ff6be44534"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.506812 4769 generic.go:334] "Generic (PLEG): container finished" podID="3d72603e-a10a-4490-8298-67db64d087fc" containerID="52648bb4b661a8c6c50f29dcbb2e628521c76a98f4664eeeaa26623f333c78ee" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.506904 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nx2t" event={"ID":"3d72603e-a10a-4490-8298-67db64d087fc","Type":"ContainerDied","Data":"52648bb4b661a8c6c50f29dcbb2e628521c76a98f4664eeeaa26623f333c78ee"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.514431 4769 generic.go:334] "Generic (PLEG): container finished" podID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" containerID="77def06c9daefb086f0355ee46072f20bab89a75ed5e0bf4dc001c469ff25434" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.514501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-892lk" event={"ID":"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9","Type":"ContainerDied","Data":"77def06c9daefb086f0355ee46072f20bab89a75ed5e0bf4dc001c469ff25434"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.514535 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-892lk" event={"ID":"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9","Type":"ContainerStarted","Data":"922a37c04813d1f740b1b1fafb93a43831f287f7e26c6b8164075378950823fd"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.528056 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"897d1b1eae0db3bae6b6a31c80b43bd0fb6f29d261414e341a479ba8ba030026"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.528103 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"8930b3c902daeb8b565ad6483914b78f7d642a6ddf06936626825b35c7fa4dff"} Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.529579 4769 generic.go:334] "Generic (PLEG): container finished" podID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" containerID="9adc3b6e5ed26c0015ab034169ba62530ada71abb392698e2ee878b4e52729c9" exitCode=0 Jan 22 14:01:05 crc kubenswrapper[4769]: I0122 14:01:05.530427 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24cb-account-create-update-rtdf4" event={"ID":"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e","Type":"ContainerDied","Data":"9adc3b6e5ed26c0015ab034169ba62530ada71abb392698e2ee878b4e52729c9"} Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:05.999704 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.066768 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") pod \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.067017 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldw6j\" (UniqueName: \"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") pod \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\" (UID: \"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0\") " Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.068782 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" (UID: "ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.076738 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j" (OuterVolumeSpecName: "kube-api-access-ldw6j") pod "ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" (UID: "ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0"). InnerVolumeSpecName "kube-api-access-ldw6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.168463 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldw6j\" (UniqueName: \"kubernetes.io/projected/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-kube-api-access-ldw6j\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.168498 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.540467 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"a768ce4cac51de83b2e0e35b63af262ccfcd78325665e2b1c6145183f8c4b7fc"} Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.543897 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7r9tp" event={"ID":"ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0","Type":"ContainerDied","Data":"53daaafba2df179129fbaee7564a7dbb0810bedc12982841f6922ce3e0b0c0bc"} Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.543949 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53daaafba2df179129fbaee7564a7dbb0810bedc12982841f6922ce3e0b0c0bc" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.543964 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7r9tp" Jan 22 14:01:06 crc kubenswrapper[4769]: I0122 14:01:06.998448 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.082763 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") pod \"3d72603e-a10a-4490-8298-67db64d087fc\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.083115 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") pod \"3d72603e-a10a-4490-8298-67db64d087fc\" (UID: \"3d72603e-a10a-4490-8298-67db64d087fc\") " Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.083724 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3d72603e-a10a-4490-8298-67db64d087fc" (UID: "3d72603e-a10a-4490-8298-67db64d087fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.088035 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24" (OuterVolumeSpecName: "kube-api-access-bdv24") pod "3d72603e-a10a-4490-8298-67db64d087fc" (UID: "3d72603e-a10a-4490-8298-67db64d087fc"). InnerVolumeSpecName "kube-api-access-bdv24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.177815 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.185440 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdv24\" (UniqueName: \"kubernetes.io/projected/3d72603e-a10a-4490-8298-67db64d087fc-kube-api-access-bdv24\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.185477 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3d72603e-a10a-4490-8298-67db64d087fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.186911 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-892lk" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.286627 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") pod \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.286736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") pod \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.286808 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") pod \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\" (UID: \"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e\") " Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.286832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") pod \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\" (UID: \"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9\") " Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.287399 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" (UID: "cb68cb3e-c079-4e87-ae9b-be93a2b8b80e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.287466 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" (UID: "ad0702a4-ee8a-45da-9cb7-40c2e4b257b9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.287850 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.287871 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.294075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg" (OuterVolumeSpecName: "kube-api-access-45ccg") pod "ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" (UID: "ad0702a4-ee8a-45da-9cb7-40c2e4b257b9"). InnerVolumeSpecName "kube-api-access-45ccg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.304056 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb" (OuterVolumeSpecName: "kube-api-access-f2wnb") pod "cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" (UID: "cb68cb3e-c079-4e87-ae9b-be93a2b8b80e"). InnerVolumeSpecName "kube-api-access-f2wnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.389561 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2wnb\" (UniqueName: \"kubernetes.io/projected/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e-kube-api-access-f2wnb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.389602 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45ccg\" (UniqueName: \"kubernetes.io/projected/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9-kube-api-access-45ccg\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.556828 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24cb-account-create-update-rtdf4" event={"ID":"cb68cb3e-c079-4e87-ae9b-be93a2b8b80e","Type":"ContainerDied","Data":"7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8"} Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.556867 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d34d84588f950f10863a3d8b771247ec7e6196fa9aab76b092308a4474630c8" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.556940 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24cb-account-create-update-rtdf4" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.570244 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-5nx2t" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.570227 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-5nx2t" event={"ID":"3d72603e-a10a-4490-8298-67db64d087fc","Type":"ContainerDied","Data":"1398047490e7ad774844fcdd21d36eeaa7ef1a8b0e137e6b1405961ab26a58b1"} Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.570682 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1398047490e7ad774844fcdd21d36eeaa7ef1a8b0e137e6b1405961ab26a58b1" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.574284 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-892lk" event={"ID":"ad0702a4-ee8a-45da-9cb7-40c2e4b257b9","Type":"ContainerDied","Data":"922a37c04813d1f740b1b1fafb93a43831f287f7e26c6b8164075378950823fd"} Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.574327 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="922a37c04813d1f740b1b1fafb93a43831f287f7e26c6b8164075378950823fd" Jan 22 14:01:07 crc kubenswrapper[4769]: I0122 14:01:07.574338 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-892lk" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.014831 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.042730 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.138934 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") pod \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.138997 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl5gx\" (UniqueName: \"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") pod \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.139072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") pod \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\" (UID: \"51e2f7fd-cd2e-4a84-b62a-27915d32469c\") " Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.139095 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") pod \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\" (UID: \"ec90402f-c994-4710-b82f-5c8cc3f12fdf\") " Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.139601 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51e2f7fd-cd2e-4a84-b62a-27915d32469c" (UID: "51e2f7fd-cd2e-4a84-b62a-27915d32469c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.139966 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec90402f-c994-4710-b82f-5c8cc3f12fdf" (UID: "ec90402f-c994-4710-b82f-5c8cc3f12fdf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.143328 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg" (OuterVolumeSpecName: "kube-api-access-hs9rg") pod "ec90402f-c994-4710-b82f-5c8cc3f12fdf" (UID: "ec90402f-c994-4710-b82f-5c8cc3f12fdf"). InnerVolumeSpecName "kube-api-access-hs9rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.144845 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx" (OuterVolumeSpecName: "kube-api-access-cl5gx") pod "51e2f7fd-cd2e-4a84-b62a-27915d32469c" (UID: "51e2f7fd-cd2e-4a84-b62a-27915d32469c"). InnerVolumeSpecName "kube-api-access-cl5gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.240935 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs9rg\" (UniqueName: \"kubernetes.io/projected/ec90402f-c994-4710-b82f-5c8cc3f12fdf-kube-api-access-hs9rg\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.240974 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl5gx\" (UniqueName: \"kubernetes.io/projected/51e2f7fd-cd2e-4a84-b62a-27915d32469c-kube-api-access-cl5gx\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.240984 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51e2f7fd-cd2e-4a84-b62a-27915d32469c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.240992 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec90402f-c994-4710-b82f-5c8cc3f12fdf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.599806 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"71fb5bc6f9e9c2c599e91ba4cb6564a5a859a950c3b343c6218824a3ce16549a"} Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.601842 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8bb3-account-create-update-x6jhs" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.601842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8bb3-account-create-update-x6jhs" event={"ID":"ec90402f-c994-4710-b82f-5c8cc3f12fdf","Type":"ContainerDied","Data":"2526f6d6abe9ddf1def4e75e6755fa98fa5b8f9ceae123095b211a7facde003a"} Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.602106 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2526f6d6abe9ddf1def4e75e6755fa98fa5b8f9ceae123095b211a7facde003a" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.604821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-8372-account-create-update-lq4fn" event={"ID":"51e2f7fd-cd2e-4a84-b62a-27915d32469c","Type":"ContainerDied","Data":"d679f95f173487e55b7459bd3fc7f4540a679004c865d7f6767595d3d679ed77"} Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.604986 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d679f95f173487e55b7459bd3fc7f4540a679004c865d7f6767595d3d679ed77" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.605065 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-8372-account-create-update-lq4fn" Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.611365 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7c9w" event={"ID":"275c0c66-cbd1-4469-81f6-c33a1eab0ed6","Type":"ContainerStarted","Data":"3fff52ca9914171d818af9485b605a038595dddbd005e73b62529f4a697aa6bd"} Jan 22 14:01:10 crc kubenswrapper[4769]: I0122 14:01:10.632407 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-r7c9w" podStartSLOduration=2.188598078 podStartE2EDuration="8.632391781s" podCreationTimestamp="2026-01-22 14:01:02 +0000 UTC" firstStartedPulling="2026-01-22 14:01:03.458711386 +0000 UTC m=+1042.869821325" lastFinishedPulling="2026-01-22 14:01:09.902505099 +0000 UTC m=+1049.313615028" observedRunningTime="2026-01-22 14:01:10.624918907 +0000 UTC m=+1050.036028836" watchObservedRunningTime="2026-01-22 14:01:10.632391781 +0000 UTC m=+1050.043501710" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.626008 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"b8093b148307c549128a45be1e93e4639e7ec527913f9314337ea0c5f3334a00"} Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.626409 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"8e03e577b39ab439637acb4ad818d5d0a4150b3aa00c5025409ee33d5361ebe5"} Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.626429 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ce65dba3-22b9-482f-b3da-2f4705468ea4","Type":"ContainerStarted","Data":"1e6a6bed31e512a165b094eda0f928465082927b9b175261d080013cdbf2e8bc"} Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.665746 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=24.878220274 podStartE2EDuration="44.66572537s" podCreationTimestamp="2026-01-22 14:00:27 +0000 UTC" firstStartedPulling="2026-01-22 14:00:44.765178014 +0000 UTC 
m=+1024.176287943" lastFinishedPulling="2026-01-22 14:01:04.55268311 +0000 UTC m=+1043.963793039" observedRunningTime="2026-01-22 14:01:11.65950431 +0000 UTC m=+1051.070614249" watchObservedRunningTime="2026-01-22 14:01:11.66572537 +0000 UTC m=+1051.076835299" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.951910 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"] Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952591 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952611 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952628 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952634 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952645 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d72603e-a10a-4490-8298-67db64d087fc" containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952651 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d72603e-a10a-4490-8298-67db64d087fc" containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952664 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952670 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952690 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952695 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: E0122 14:01:11.952704 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952711 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952867 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952879 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d72603e-a10a-4490-8298-67db64d087fc" containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952889 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" 
containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952898 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" containerName="mariadb-database-create" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952908 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.952917 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" containerName="mariadb-account-create-update" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.953721 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.956310 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 22 14:01:11 crc kubenswrapper[4769]: I0122 14:01:11.965959 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"] Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081300 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081374 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081549 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081881 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.081937 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: 
\"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184013 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184147 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184225 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184297 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.184327 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185184 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185185 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185185 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 
14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185349 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.185491 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.211905 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") pod \"dnsmasq-dns-77585f5f8c-xsb4l\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") " pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.272759 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:12 crc kubenswrapper[4769]: I0122 14:01:12.742145 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"] Jan 22 14:01:12 crc kubenswrapper[4769]: W0122 14:01:12.754233 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd83064db_7f62_4af5_9747_89e9054b3a16.slice/crio-dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406 WatchSource:0}: Error finding container dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406: Status 404 returned error can't find the container with id dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406 Jan 22 14:01:13 crc kubenswrapper[4769]: I0122 14:01:13.644355 4769 generic.go:334] "Generic (PLEG): container finished" podID="d83064db-7f62-4af5-9747-89e9054b3a16" containerID="9774b2b75f642e7815cf529b073ae431051a8ec6d35e8b2a86b691abcc256a58" exitCode=0 Jan 22 14:01:13 crc kubenswrapper[4769]: I0122 14:01:13.644515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerDied","Data":"9774b2b75f642e7815cf529b073ae431051a8ec6d35e8b2a86b691abcc256a58"} Jan 22 14:01:13 crc kubenswrapper[4769]: I0122 14:01:13.644763 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerStarted","Data":"dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406"} Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.657214 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerStarted","Data":"c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4"} Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.657687 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.659780 4769 generic.go:334] "Generic (PLEG): container finished" 
podID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" containerID="3fff52ca9914171d818af9485b605a038595dddbd005e73b62529f4a697aa6bd" exitCode=0 Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.659879 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7c9w" event={"ID":"275c0c66-cbd1-4469-81f6-c33a1eab0ed6","Type":"ContainerDied","Data":"3fff52ca9914171d818af9485b605a038595dddbd005e73b62529f4a697aa6bd"} Jan 22 14:01:14 crc kubenswrapper[4769]: I0122 14:01:14.688351 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" podStartSLOduration=3.688317317 podStartE2EDuration="3.688317317s" podCreationTimestamp="2026-01-22 14:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:14.674821261 +0000 UTC m=+1054.085931190" watchObservedRunningTime="2026-01-22 14:01:14.688317317 +0000 UTC m=+1054.099427316" Jan 22 14:01:15 crc kubenswrapper[4769]: E0122 14:01:15.728447 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4b4ca8a_8b9e_48d2_9208_fecb2bc9a299.slice/crio-61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4.scope\": RecentStats: unable to find data in memory cache]" Jan 22 14:01:15 crc kubenswrapper[4769]: I0122 14:01:15.980906 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.049144 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") pod \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.049290 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") pod \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.049358 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") pod \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\" (UID: \"275c0c66-cbd1-4469-81f6-c33a1eab0ed6\") " Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.054746 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t" (OuterVolumeSpecName: "kube-api-access-6ld4t") pod "275c0c66-cbd1-4469-81f6-c33a1eab0ed6" (UID: "275c0c66-cbd1-4469-81f6-c33a1eab0ed6"). InnerVolumeSpecName "kube-api-access-6ld4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.073882 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "275c0c66-cbd1-4469-81f6-c33a1eab0ed6" (UID: "275c0c66-cbd1-4469-81f6-c33a1eab0ed6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.111900 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data" (OuterVolumeSpecName: "config-data") pod "275c0c66-cbd1-4469-81f6-c33a1eab0ed6" (UID: "275c0c66-cbd1-4469-81f6-c33a1eab0ed6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.151512 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.151545 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.151556 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ld4t\" (UniqueName: \"kubernetes.io/projected/275c0c66-cbd1-4469-81f6-c33a1eab0ed6-kube-api-access-6ld4t\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.679462 4769 generic.go:334] "Generic (PLEG): container finished" podID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" containerID="61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4" exitCode=0 Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.679527 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t9sxw" event={"ID":"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299","Type":"ContainerDied","Data":"61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4"} Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.681283 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r7c9w" event={"ID":"275c0c66-cbd1-4469-81f6-c33a1eab0ed6","Type":"ContainerDied","Data":"92baa55a546dc1edc3b0176ea083063e122cca726bb4af4e4e8f8b15d0ee43c7"} Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.681308 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92baa55a546dc1edc3b0176ea083063e122cca726bb4af4e4e8f8b15d0ee43c7" Jan 22 14:01:16 crc kubenswrapper[4769]: I0122 14:01:16.681364 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r7c9w" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.007315 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wdqr9"] Jan 22 14:01:17 crc kubenswrapper[4769]: E0122 14:01:17.012298 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" containerName="keystone-db-sync" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.012564 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" containerName="keystone-db-sync" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.012895 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" containerName="keystone-db-sync" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.013657 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.019836 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.020307 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.020309 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.020634 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nrw5d" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.020936 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.032863 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.033165 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="dnsmasq-dns" containerID="cri-o://c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4" gracePeriod=10 Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.046986 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wdqr9"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.064886 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.064971 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.064998 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.065064 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.065105 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" 
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.065144 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.141696 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.162935 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.166996 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167057 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167082 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167159 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.167186 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.171712 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.172039 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.172174 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.173377 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.173842 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.237807 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") pod \"keystone-bootstrap-wdqr9\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.264936 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.272752 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304078 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304109 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304147 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: 
\"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304275 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.304340 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.315120 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.317261 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.317525 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jpqbp" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.322429 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.322683 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rqjpw"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.323580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.335640 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.340242 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.350204 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rqjpw"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.350328 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.350491 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.350682 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7p5j2" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.412865 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413683 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413735 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413754 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413843 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413866 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413882 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") pod \"horizon-89bdb59-vr94p\" (UID: 
\"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413909 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413931 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413950 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413971 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.413993 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.414016 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.414035 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422146 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422239 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 
14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422313 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.422504 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.463000 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") pod \"dnsmasq-dns-55fff446b9-h5gf8\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.468883 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-l4hnw"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.469985 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.474298 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.474479 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.474493 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m6vjl"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.490645 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.492725 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.502016 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.506642 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l4hnw"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.511089 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.516998 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517077 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517175 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517245 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517275 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517300 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517336 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.517393 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.518045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.518936 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.521749 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.526998 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.537721 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.541243 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-zzjpd"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.547484 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.571170 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.574844 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zzjpd"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.576883 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qkkxv"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.581004 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") pod \"neutron-db-sync-rqjpw\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.586822 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.600210 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") pod \"horizon-89bdb59-vr94p\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.615577 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619204 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619264 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619290 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619317 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619343 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619370 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619390 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619432 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619469 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619484 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619533 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619592 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.619608 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.635176 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.636681 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.641263 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89bdb59-vr94p"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.644860 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.667575 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bjdj8"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.687945 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.694627 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dx89d"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.695331 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.696980 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bjdj8"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.703821 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734767 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734805 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734825 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734846 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734871 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734886 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734904 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734919 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734942 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734966 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.734993 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735022 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735037 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735054 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735073 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735089 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735112 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735133 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735156 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735173 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735191 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735216 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735239 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735254 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.735273 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.738448 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.739260 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.739675 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.739934 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.741304 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.742618 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.746110 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.749641 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.752585 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.764637 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.764945 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.767701 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.772820 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.773857 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.780003 4769 generic.go:334] "Generic (PLEG): container finished" podID="d83064db-7f62-4af5-9747-89e9054b3a16" containerID="c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4" exitCode=0
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.780391 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerDied","Data":"c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4"}
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.801380 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") pod \"cinder-db-sync-l4hnw\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.818655 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rqjpw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.819148 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") pod \"barbican-db-sync-zzjpd\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.841073 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.842520 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844341 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844393 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844419 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844500 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844578 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844691 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.844839 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.859171 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.859693 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.859922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.863925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.864288 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l4hnw"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.865178 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.868634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.872742 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") pod \"ceilometer-0\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") " pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.873128 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.873238 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.875241 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"]
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.906941 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") pod \"horizon-5c66f6f78c-g92qm\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.907330 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.931914 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") pod \"placement-db-sync-bjdj8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.953680 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzjpd"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.969922 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.969983 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.970010 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.970076 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.970098 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:17 crc kubenswrapper[4769]: I0122 14:01:17.970156 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.012307 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c66f6f78c-g92qm"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.042651 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bjdj8"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.071860 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.071952 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.071974 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.071990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.072039 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.072057 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.073161 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.073712 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.074528 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.075265 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.076587 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.096708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") pod \"dnsmasq-dns-76fcf4b695-x8v8z\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.126981 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wdqr9"]
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.208289 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.543863 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"]
Jan 22 14:01:18 crc kubenswrapper[4769]: W0122 14:01:18.558023 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a7a7218_57a6_4091_9bd0_568fda3122fd.slice/crio-4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e WatchSource:0}: Error finding container 4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e: Status 404 returned error can't find the container with id 4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.588822 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.592343 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t9sxw"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.684138 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685112 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685151 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685179 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") pod \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685249 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") pod \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685320 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685397 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685455 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") pod \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685508 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") pod \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\" (UID: \"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.685557 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") pod \"d83064db-7f62-4af5-9747-89e9054b3a16\" (UID: \"d83064db-7f62-4af5-9747-89e9054b3a16\") "
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.696274 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.705212 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk" (OuterVolumeSpecName: "kube-api-access-bf9rk") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "kube-api-access-bf9rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.707642 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" (UID: "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.732338 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv" (OuterVolumeSpecName: "kube-api-access-2g8wv") pod "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" (UID: "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299"). InnerVolumeSpecName "kube-api-access-2g8wv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.747303 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-89bdb59-vr94p"]
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.787409 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g8wv\" (UniqueName: \"kubernetes.io/projected/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-kube-api-access-2g8wv\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.787432 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf9rk\" (UniqueName: \"kubernetes.io/projected/d83064db-7f62-4af5-9747-89e9054b3a16-kube-api-access-bf9rk\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.787444 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.788963 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" (UID: "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.813149 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.814045 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdqr9" event={"ID":"77ac558e-a319-4c27-9869-fee6f85736e5","Type":"ContainerStarted","Data":"df266f1e50e71fe12d82262c0a9066d4bf0ba22b1f00a59909f486af0c226b44"}
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.814083 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdqr9" event={"ID":"77ac558e-a319-4c27-9869-fee6f85736e5","Type":"ContainerStarted","Data":"6ef39fb051bbbb437f666b731505375e45c29b3f70e4b2350cee07e7caf59e41"}
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.815353 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t9sxw" event={"ID":"b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299","Type":"ContainerDied","Data":"a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8"}
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.815377 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a42428629277f66933eacee3971f3f2723dc11f98515a43b6d67b24d1023bea8"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.815381 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t9sxw"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.818456 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerStarted","Data":"21b21bef7c85b718cfdbb016fe626efbd1ab870c4b734875a383413b1b9ca2cc"}
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.822282 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l" event={"ID":"d83064db-7f62-4af5-9747-89e9054b3a16","Type":"ContainerDied","Data":"dbb6500f9b05697e49983b947e554cc120f56667f136da516b67517e34882406"}
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.822313 4769 scope.go:117] "RemoveContainer" containerID="c113fcdaeea4262d86857b16ac35b7758e49c93e4706f03d96c76c2d8565a5e4"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.822333 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-xsb4l"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.822578 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data" (OuterVolumeSpecName: "config-data") pod "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" (UID: "b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.827273 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89bdb59-vr94p" event={"ID":"5c4b43cf-c766-4b56-a016-a3f2d26656a1","Type":"ContainerStarted","Data":"1d75749a17b6133af8d4548979dade04116fbb2ac5e6040ef99419c36e560e9d"}
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.831961 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" event={"ID":"0a7a7218-57a6-4091-9bd0-568fda3122fd","Type":"ContainerStarted","Data":"4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e"}
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.838599 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wdqr9" podStartSLOduration=2.838575401 podStartE2EDuration="2.838575401s" podCreationTimestamp="2026-01-22 14:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:18.831563261 +0000 UTC m=+1058.242673190" watchObservedRunningTime="2026-01-22 14:01:18.838575401 +0000 UTC m=+1058.249685330"
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.880712 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.895545 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.895582 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.895596 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.899951 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.916401 4769 scope.go:117] "RemoveContainer" containerID="9774b2b75f642e7815cf529b073ae431051a8ec6d35e8b2a86b691abcc256a58"
Jan 22 14:01:18 crc kubenswrapper[4769]: W0122 14:01:18.937217 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eb8819f_512d_43d8_a59e_1ba8e7e1fb06.slice/crio-81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6 WatchSource:0}: Error finding container 81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6: Status 404 returned error can't find the container with id 81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.937834 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.938664 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-l4hnw"]
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.938703 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rqjpw"]
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.941462 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config" (OuterVolumeSpecName: "config") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.966220 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-zzjpd"]
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.980697 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bjdj8"]
Jan 22 14:01:18 crc kubenswrapper[4769]: I0122 14:01:18.999038 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d83064db-7f62-4af5-9747-89e9054b3a16" (UID: "d83064db-7f62-4af5-9747-89e9054b3a16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.001894 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.001927 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.001939 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d83064db-7f62-4af5-9747-89e9054b3a16-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.055211 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"]
Jan 22 14:01:19 crc kubenswrapper[4769]: W0122 14:01:19.073163 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf79e78c3_4c98_41e2_be1e_19d794ed1c17.slice/crio-d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d WatchSource:0}: Error finding container d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d: Status 404 returned error can't find the container with id d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.098354 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"]
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.192368 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"]
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.289701 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"]
Jan 22 14:01:19 crc kubenswrapper[4769]: E0122 14:01:19.290123 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" containerName="glance-db-sync"
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290142 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" containerName="glance-db-sync"
Jan 22 14:01:19 crc kubenswrapper[4769]: E0122 14:01:19.290163 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="dnsmasq-dns"
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290168 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="dnsmasq-dns"
Jan 22 14:01:19 crc kubenswrapper[4769]: E0122 14:01:19.290183 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="init"
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290189 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" containerName="init"
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290349 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" containerName="glance-db-sync"
Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.290366 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83064db-7f62-4af5-9747-89e9054b3a16"
containerName="dnsmasq-dns" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.291209 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.307091 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.392097 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.417809 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-xsb4l"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431469 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431569 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431598 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431619 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431640 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lf9\" (UniqueName: \"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.431757 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.532921 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc 
kubenswrapper[4769]: I0122 14:01:19.532977 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533082 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lf9\" (UniqueName: \"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533915 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533942 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.533915 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.534427 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.534524 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.549224 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.594592 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9lf9\" (UniqueName: \"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") pod \"dnsmasq-dns-8b5c85b87-8bcps\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.629653 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.631858 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.660487 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-khhk4" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.661020 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.676263 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.678914 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.691854 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.699099 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.700745 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.703507 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775580 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775641 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775710 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775731 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775746 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775764 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.775780 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.853810 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878706 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " 
pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878808 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878838 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878854 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878891 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878914 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878948 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878968 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.878990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc 
kubenswrapper[4769]: I0122 14:01:19.879010 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.879045 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.882069 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.883558 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.885244 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.889700 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899237 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899405 4769 generic.go:334] "Generic (PLEG): container finished" podID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" containerID="eee63cea153f84f7bdefbf41b826f8e50ee41200112ad207069eaf7592e1b871" exitCode=0 Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899601 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899627 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" event={"ID":"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd","Type":"ContainerDied","Data":"eee63cea153f84f7bdefbf41b826f8e50ee41200112ad207069eaf7592e1b871"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899649 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" event={"ID":"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd","Type":"ContainerStarted","Data":"dd163787184b799a47be1dc4a764a72ea38c1e55f5f24860611abd6e7a863477"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.899947 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") pod 
\"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.901538 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.907626 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.909661 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.929810 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.930365 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l4hnw" event={"ID":"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06","Type":"ContainerStarted","Data":"81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.955396 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rqjpw" event={"ID":"f7c0ef06-5806-418c-8a10-81ea6afb0401","Type":"ContainerStarted","Data":"3c1a07b1b0fdcc85ff1215b6b0ffc50eb270b562fc9ca8873d111f3b05220e1b"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.955443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rqjpw" event={"ID":"f7c0ef06-5806-418c-8a10-81ea6afb0401","Type":"ContainerStarted","Data":"f5f34c732ee37b95ec899f49855f9cce53d55317437fe6fd87284898a608994d"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.968083 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.975380 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzjpd" event={"ID":"a7f766e1-262c-4861-a117-2454631e284f","Type":"ContainerStarted","Data":"d9766e548e18d10e2948ccf9973b496ef374cc1f1a4772a78ff7fa96b507f7e2"} Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.983954 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984132 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984162 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984307 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984377 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984480 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984557 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.984627 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.985425 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.998237 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46h6\" 
(UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.998313 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.998866 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.999146 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.999486 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:19 crc kubenswrapper[4769]: I0122 14:01:19.999779 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.004974 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjdj8" event={"ID":"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8","Type":"ContainerStarted","Data":"db6d489e657294f84dd39f03818355418206b6b45168e98d6d149865405021b3"} Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.014185 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c66f6f78c-g92qm" event={"ID":"f79e78c3-4c98-41e2-be1e-19d794ed1c17","Type":"ContainerStarted","Data":"d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d"} Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.030447 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a7a7218-57a6-4091-9bd0-568fda3122fd" containerID="5b046e6375a09633251daeedb629c23f7e50d18e24421e4669f77c9e865c9999" exitCode=0 Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.031352 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" event={"ID":"0a7a7218-57a6-4091-9bd0-568fda3122fd","Type":"ContainerDied","Data":"5b046e6375a09633251daeedb629c23f7e50d18e24421e4669f77c9e865c9999"} Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.050402 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.052276 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rqjpw" podStartSLOduration=3.05225483 podStartE2EDuration="3.05225483s" podCreationTimestamp="2026-01-22 14:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:20.031507788 +0000 UTC m=+1059.442617717" watchObservedRunningTime="2026-01-22 14:01:20.05225483 +0000 UTC m=+1059.463364759" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.055515 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") pod \"horizon-88b8d5fbf-mdp8d\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.104781 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105112 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105194 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h46h6\" (UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105215 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105268 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.105320 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.106123 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.106737 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.107958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.109930 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.116292 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.136320 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.138374 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.168729 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46h6\" (UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.188504 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.218414 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:01:20 crc kubenswrapper[4769]: I0122 14:01:20.533393 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.621866 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622051 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622122 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622274 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622344 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.622390 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") pod \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\" (UID: \"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.637259 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp" (OuterVolumeSpecName: "kube-api-access-wf4pp") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "kube-api-access-wf4pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.655477 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.689464 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.702684 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.710471 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config" (OuterVolumeSpecName: "config") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.711305 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" (UID: "3c9c86b4-dc88-4cbe-82e1-40198f4b39cd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726034 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726321 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726331 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726344 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726354 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.726363 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf4pp\" (UniqueName: \"kubernetes.io/projected/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd-kube-api-access-wf4pp\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.751731 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.820712 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.827447 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.827633 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.827910 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.827969 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.828055 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.828102 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") pod \"0a7a7218-57a6-4091-9bd0-568fda3122fd\" (UID: \"0a7a7218-57a6-4091-9bd0-568fda3122fd\") " Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.851640 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.852540 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr" (OuterVolumeSpecName: "kube-api-access-7cpcr") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "kube-api-access-7cpcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.862305 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.870381 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.870553 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.875098 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config" (OuterVolumeSpecName: "config") pod "0a7a7218-57a6-4091-9bd0-568fda3122fd" (UID: "0a7a7218-57a6-4091-9bd0-568fda3122fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.924579 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83064db-7f62-4af5-9747-89e9054b3a16" path="/var/lib/kubelet/pods/d83064db-7f62-4af5-9747-89e9054b3a16/volumes" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.939928 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940019 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940044 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940066 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940081 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cpcr\" (UniqueName: \"kubernetes.io/projected/0a7a7218-57a6-4091-9bd0-568fda3122fd-kube-api-access-7cpcr\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:20.940093 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/0a7a7218-57a6-4091-9bd0-568fda3122fd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.052591 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" event={"ID":"0a7a7218-57a6-4091-9bd0-568fda3122fd","Type":"ContainerDied","Data":"4eb18875b161af5978dbb74b8a3a3ee948221ffd789ab96d7eb64d0341675e2e"} Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.052645 4769 scope.go:117] "RemoveContainer" containerID="5b046e6375a09633251daeedb629c23f7e50d18e24421e4669f77c9e865c9999" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.052769 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-h5gf8" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.069932 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.070111 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76fcf4b695-x8v8z" event={"ID":"3c9c86b4-dc88-4cbe-82e1-40198f4b39cd","Type":"ContainerDied","Data":"dd163787184b799a47be1dc4a764a72ea38c1e55f5f24860611abd6e7a863477"} Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.074416 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerStarted","Data":"dbba61067789f8e4b68dedf1066a578d68118546758df6cfdb39ad7d7ae20588"} Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.118982 4769 scope.go:117] "RemoveContainer" containerID="eee63cea153f84f7bdefbf41b826f8e50ee41200112ad207069eaf7592e1b871" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.179757 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.186702 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76fcf4b695-x8v8z"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.233856 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:21.239648 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-h5gf8"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:22.899189 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7a7218-57a6-4091-9bd0-568fda3122fd" path="/var/lib/kubelet/pods/0a7a7218-57a6-4091-9bd0-568fda3122fd/volumes" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:22.900120 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" path="/var/lib/kubelet/pods/3c9c86b4-dc88-4cbe-82e1-40198f4b39cd/volumes" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:23.099944 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerStarted","Data":"f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451"} Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:29.772626 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:29.847342 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.104639 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.130926 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:30.131503 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.131525 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:30.131575 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7a7218-57a6-4091-9bd0-568fda3122fd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.131581 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7a7218-57a6-4091-9bd0-568fda3122fd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.131838 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7a7218-57a6-4091-9bd0-568fda3122fd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.131875 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c9c86b4-dc88-4cbe-82e1-40198f4b39cd" containerName="init" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.133075 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.135180 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.194138 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.222382 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.245903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.245963 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246083 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246129 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246169 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246283 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.246356 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.253979 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cc4c8d8bd-69kmb"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.255552 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.270944 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cc4c8d8bd-69kmb"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348280 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-tls-certs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348436 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-logs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348513 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348546 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-config-data\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348589 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348679 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-scripts\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348752 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348821 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-secret-key\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348887 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq6l6\" (UniqueName: \"kubernetes.io/projected/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-kube-api-access-bq6l6\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348923 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.348965 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.349019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.349046 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-combined-ca-bundle\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.350308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.350869 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.351026 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.355627 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.360316 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.360659 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.366440 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") pod \"horizon-6464b9bcc6-tjgjv\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.450700 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq6l6\" (UniqueName: \"kubernetes.io/projected/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-kube-api-access-bq6l6\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451092 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-combined-ca-bundle\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: 
\"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451209 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-tls-certs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451239 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-logs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451282 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-config-data\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451321 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-scripts\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451376 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-secret-key\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.451736 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-logs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.452342 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-scripts\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.453091 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-config-data\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.459126 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.460238 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-combined-ca-bundle\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.461761 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-tls-certs\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.461988 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-horizon-secret-key\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.464739 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq6l6\" (UniqueName: \"kubernetes.io/projected/9a6a04bb-fa49-41f8-b75b-9c27873f8a1f-kube-api-access-bq6l6\") pod \"horizon-7cc4c8d8bd-69kmb\" (UID: \"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f\") " pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.508647 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pkl6g" podUID="ed1198a5-a7fa-4ab4-9656-8e9700deec37" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:30.580517 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:31.193372 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerStarted","Data":"c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a"} Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:33.029668 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:33.030192 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g78xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-bjdj8_openstack(a0e92228-1a9b-49fc-9dfd-0493f70f5ee8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:33.031412 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-bjdj8" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.226471 4769 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/glance-default-external-api-0" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-log" containerID="cri-o://f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451" gracePeriod=30 Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.226851 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-httpd" containerID="cri-o://c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a" gracePeriod=30 Jan 22 14:01:33 crc kubenswrapper[4769]: E0122 14:01:33.228553 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-bjdj8" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.250688 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=14.250670384 podStartE2EDuration="14.250670384s" podCreationTimestamp="2026-01-22 14:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:33.249385599 +0000 UTC m=+1072.660495528" watchObservedRunningTime="2026-01-22 14:01:33.250670384 +0000 UTC m=+1072.661780313" Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.364742 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.373182 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:01:33 crc kubenswrapper[4769]: I0122 14:01:33.596673 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:01:34 crc kubenswrapper[4769]: I0122 14:01:34.238122 4769 generic.go:334] "Generic (PLEG): container finished" podID="84850145-89ac-4660-8a13-6abde9509589" containerID="c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a" exitCode=0 Jan 22 14:01:34 crc kubenswrapper[4769]: I0122 14:01:34.238149 4769 generic.go:334] "Generic (PLEG): container finished" podID="84850145-89ac-4660-8a13-6abde9509589" containerID="f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451" exitCode=143 Jan 22 14:01:34 crc kubenswrapper[4769]: I0122 14:01:34.238166 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerDied","Data":"c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a"} Jan 22 14:01:34 crc kubenswrapper[4769]: I0122 14:01:34.238190 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerDied","Data":"f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451"} Jan 22 14:01:43 crc kubenswrapper[4769]: E0122 14:01:43.194166 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 22 14:01:43 crc kubenswrapper[4769]: E0122 14:01:43.194926 4769 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b5h5h68bh59bh5d6h5f5h57bhfch58ch546h54ch5dhd6h67dh84h596h84h565h677h597h649h54bh69h68fh7fhcbh5c6h685hdfh656h64h55q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jb9wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-89bdb59-vr94p_openstack(5c4b43cf-c766-4b56-a016-a3f2d26656a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:43 crc kubenswrapper[4769]: E0122 14:01:43.199011 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-89bdb59-vr94p" podUID="5c4b43cf-c766-4b56-a016-a3f2d26656a1" Jan 22 14:01:46 crc kubenswrapper[4769]: I0122 14:01:46.331042 4769 generic.go:334] "Generic (PLEG): container finished" podID="77ac558e-a319-4c27-9869-fee6f85736e5" containerID="df266f1e50e71fe12d82262c0a9066d4bf0ba22b1f00a59909f486af0c226b44" exitCode=0 Jan 22 14:01:46 crc kubenswrapper[4769]: I0122 14:01:46.331556 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdqr9" event={"ID":"77ac558e-a319-4c27-9869-fee6f85736e5","Type":"ContainerDied","Data":"df266f1e50e71fe12d82262c0a9066d4bf0ba22b1f00a59909f486af0c226b44"} Jan 22 14:01:50 crc kubenswrapper[4769]: I0122 14:01:50.051695 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:01:50 crc kubenswrapper[4769]: I0122 14:01:50.052406 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.424654 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerStarted","Data":"de08ee3bddd1437f1405dc62dcd35ee86837e2196876742c81be83ac8aaa6642"} Jan 22 14:01:55 crc kubenswrapper[4769]: W0122 14:01:55.455734 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20251361_dc9f_403b_bffa_2a52a61e1bf4.slice/crio-64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f WatchSource:0}: Error finding container 64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f: Status 404 returned error can't find the container with id 64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.554992 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.563280 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628442 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628500 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628542 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628597 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628638 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628662 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628679 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") pod \"77ac558e-a319-4c27-9869-fee6f85736e5\" (UID: \"77ac558e-a319-4c27-9869-fee6f85736e5\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.628752 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.629469 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.629506 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") pod \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\" (UID: \"5c4b43cf-c766-4b56-a016-a3f2d26656a1\") " Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.629997 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs" (OuterVolumeSpecName: "logs") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630160 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts" (OuterVolumeSpecName: "scripts") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630233 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data" (OuterVolumeSpecName: "config-data") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630899 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630918 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c4b43cf-c766-4b56-a016-a3f2d26656a1-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.630929 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c4b43cf-c766-4b56-a016-a3f2d26656a1-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.633100 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts" (OuterVolumeSpecName: "scripts") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.633567 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.633646 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb" (OuterVolumeSpecName: "kube-api-access-f9vbb") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "kube-api-access-f9vbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.633989 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.636006 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc" (OuterVolumeSpecName: "kube-api-access-jb9wc") pod "5c4b43cf-c766-4b56-a016-a3f2d26656a1" (UID: "5c4b43cf-c766-4b56-a016-a3f2d26656a1"). InnerVolumeSpecName "kube-api-access-jb9wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.638377 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.652241 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.656773 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data" (OuterVolumeSpecName: "config-data") pod "77ac558e-a319-4c27-9869-fee6f85736e5" (UID: "77ac558e-a319-4c27-9869-fee6f85736e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733023 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733314 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c4b43cf-c766-4b56-a016-a3f2d26656a1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733327 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733335 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb9wc\" (UniqueName: \"kubernetes.io/projected/5c4b43cf-c766-4b56-a016-a3f2d26656a1-kube-api-access-jb9wc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733344 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733351 4769 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733359 4769 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77ac558e-a319-4c27-9869-fee6f85736e5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: I0122 14:01:55.733370 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9vbb\" (UniqueName: \"kubernetes.io/projected/77ac558e-a319-4c27-9869-fee6f85736e5-kube-api-access-f9vbb\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:55 crc kubenswrapper[4769]: E0122 14:01:55.880395 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 22 14:01:55 crc kubenswrapper[4769]: E0122 14:01:55.880607 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n574h66dh588h5b8h655h54fh98h64bh555h64fh545h548hd6h676hd9h5ffh5f4h6fh656h56fh69h85h654h599h58bh8fh86h5ffhb8h7fh56bhffq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfrwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5c66f6f78c-g92qm_openstack(f79e78c3-4c98-41e2-be1e-19d794ed1c17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:55 crc kubenswrapper[4769]: E0122 14:01:55.884299 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5c66f6f78c-g92qm" podUID="f79e78c3-4c98-41e2-be1e-19d794ed1c17" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.445264 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89bdb59-vr94p" event={"ID":"5c4b43cf-c766-4b56-a016-a3f2d26656a1","Type":"ContainerDied","Data":"1d75749a17b6133af8d4548979dade04116fbb2ac5e6040ef99419c36e560e9d"} Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.445291 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89bdb59-vr94p" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.453504 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerStarted","Data":"64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f"} Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.458742 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wdqr9" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.458752 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wdqr9" event={"ID":"77ac558e-a319-4c27-9869-fee6f85736e5","Type":"ContainerDied","Data":"6ef39fb051bbbb437f666b731505375e45c29b3f70e4b2350cee07e7caf59e41"} Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.458965 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ef39fb051bbbb437f666b731505375e45c29b3f70e4b2350cee07e7caf59e41" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.460277 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerStarted","Data":"054e89b41fe504baa24efa6fdc5ef87502ed22b3b42e8052873a0df4c426e7ed"} Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.527936 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.534185 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-89bdb59-vr94p"] Jan 22 14:01:56 crc kubenswrapper[4769]: E0122 14:01:56.641769 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c4b43cf_c766_4b56_a016_a3f2d26656a1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ac558e_a319_4c27_9869_fee6f85736e5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77ac558e_a319_4c27_9869_fee6f85736e5.slice/crio-6ef39fb051bbbb437f666b731505375e45c29b3f70e4b2350cee07e7caf59e41\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c4b43cf_c766_4b56_a016_a3f2d26656a1.slice/crio-1d75749a17b6133af8d4548979dade04116fbb2ac5e6040ef99419c36e560e9d\": RecentStats: unable to find data in memory cache]" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.682429 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wdqr9"] Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.696707 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wdqr9"] Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.764343 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nv6tp"] Jan 22 14:01:56 crc kubenswrapper[4769]: E0122 14:01:56.764739 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77ac558e-a319-4c27-9869-fee6f85736e5" containerName="keystone-bootstrap" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.764762 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="77ac558e-a319-4c27-9869-fee6f85736e5" containerName="keystone-bootstrap" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.764987 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="77ac558e-a319-4c27-9869-fee6f85736e5" containerName="keystone-bootstrap" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.765605 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.768068 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.768068 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.768397 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.769557 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.769575 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nrw5d" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.778854 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nv6tp"] Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.861770 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862020 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862138 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862166 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862240 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.862261 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.894968 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5c4b43cf-c766-4b56-a016-a3f2d26656a1" path="/var/lib/kubelet/pods/5c4b43cf-c766-4b56-a016-a3f2d26656a1/volumes" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.895409 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77ac558e-a319-4c27-9869-fee6f85736e5" path="/var/lib/kubelet/pods/77ac558e-a319-4c27-9869-fee6f85736e5/volumes" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966651 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966775 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966867 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966896 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966941 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.966960 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.972989 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.973182 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.973866 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.974355 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.974660 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:56 crc kubenswrapper[4769]: I0122 14:01:56.991768 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") pod \"keystone-bootstrap-nv6tp\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:57 crc kubenswrapper[4769]: I0122 14:01:57.088251 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.162677 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.162934 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-l4hnw_openstack(3eb8819f-512d-43d8-a59e-1ba8e7e1fb06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.164350 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-l4hnw" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.434703 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.435267 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5d9hbh6hffhc9h58bh668hc4hddhfdh5cbh677h567hf5h688h544h5f7hc5h65bh54fhdfh58fhf8h8bhcbh595h57ch56ch66hf9h55bh55dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnkhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7464458e-c450-4b87-80d6-30abeb62e9d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.473025 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-l4hnw" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.853582 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.853792 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pbsw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-zzjpd_openstack(a7f766e1-262c-4861-a117-2454631e284f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 14:01:57 crc kubenswrapper[4769]: E0122 14:01:57.855009 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-zzjpd" podUID="a7f766e1-262c-4861-a117-2454631e284f" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.010437 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.011891 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087415 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087473 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087512 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087547 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087670 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087702 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087774 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087844 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 
14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087916 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") pod \"84850145-89ac-4660-8a13-6abde9509589\" (UID: \"84850145-89ac-4660-8a13-6abde9509589\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.087944 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") pod \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\" (UID: \"f79e78c3-4c98-41e2-be1e-19d794ed1c17\") " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.088600 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs" (OuterVolumeSpecName: "logs") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.089126 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts" (OuterVolumeSpecName: "scripts") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.089240 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs" (OuterVolumeSpecName: "logs") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.089746 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data" (OuterVolumeSpecName: "config-data") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.090242 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.092681 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts" (OuterVolumeSpecName: "scripts") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.093481 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn" (OuterVolumeSpecName: "kube-api-access-wfrwn") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). 
InnerVolumeSpecName "kube-api-access-wfrwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.094149 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.094450 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt" (OuterVolumeSpecName: "kube-api-access-vvsdt") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "kube-api-access-vvsdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.094966 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f79e78c3-4c98-41e2-be1e-19d794ed1c17" (UID: "f79e78c3-4c98-41e2-be1e-19d794ed1c17"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.172425 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189881 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189919 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189934 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189949 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f79e78c3-4c98-41e2-be1e-19d794ed1c17-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189962 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvsdt\" (UniqueName: \"kubernetes.io/projected/84850145-89ac-4660-8a13-6abde9509589-kube-api-access-vvsdt\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.189994 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190009 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84850145-89ac-4660-8a13-6abde9509589-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190022 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190033 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfrwn\" (UniqueName: \"kubernetes.io/projected/f79e78c3-4c98-41e2-be1e-19d794ed1c17-kube-api-access-wfrwn\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190046 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f79e78c3-4c98-41e2-be1e-19d794ed1c17-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.190059 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f79e78c3-4c98-41e2-be1e-19d794ed1c17-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.211573 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data" (OuterVolumeSpecName: "config-data") pod "84850145-89ac-4660-8a13-6abde9509589" (UID: "84850145-89ac-4660-8a13-6abde9509589"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.218087 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.267110 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.291841 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.291873 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84850145-89ac-4660-8a13-6abde9509589-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.380252 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cc4c8d8bd-69kmb"] Jan 22 14:01:58 crc kubenswrapper[4769]: W0122 14:01:58.385489 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a6a04bb_fa49_41f8_b75b_9c27873f8a1f.slice/crio-69b2f54964dafa4887d2100a41714d9572767b4c29fc6c3c4e428721442fb776 WatchSource:0}: Error finding container 69b2f54964dafa4887d2100a41714d9572767b4c29fc6c3c4e428721442fb776: Status 404 returned error can't find the container with id 69b2f54964dafa4887d2100a41714d9572767b4c29fc6c3c4e428721442fb776 Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.479497 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cc4c8d8bd-69kmb" event={"ID":"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f","Type":"ContainerStarted","Data":"69b2f54964dafa4887d2100a41714d9572767b4c29fc6c3c4e428721442fb776"} Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.480438 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nv6tp"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.481452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"84850145-89ac-4660-8a13-6abde9509589","Type":"ContainerDied","Data":"dbba61067789f8e4b68dedf1066a578d68118546758df6cfdb39ad7d7ae20588"} Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.481469 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.481492 4769 scope.go:117] "RemoveContainer" containerID="c238a5d975534ec018876b7571d6895f314000f24146c4017b29d9deb7a45c3a" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.486033 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerStarted","Data":"a21b69f798a23fdcfdfb92adcc62b30839c1be6a1c5c04d00a869ead5ddc22a7"} Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.488621 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c66f6f78c-g92qm" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.489133 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c66f6f78c-g92qm" event={"ID":"f79e78c3-4c98-41e2-be1e-19d794ed1c17","Type":"ContainerDied","Data":"d324a8923d4121e52b8f50a61c76fa823727fdd525010d41f8feff37a542e75d"} Jan 22 14:01:58 crc kubenswrapper[4769]: E0122 14:01:58.489697 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-zzjpd" podUID="a7f766e1-262c-4861-a117-2454631e284f" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.505066 4769 scope.go:117] "RemoveContainer" containerID="f3704eb4ce5b135ab7bee85bad1dffc4bc9ae3c908e85c1bad050b5ae696d451" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.534780 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.541735 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.574497 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:58 crc kubenswrapper[4769]: E0122 14:01:58.575161 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-log" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.575182 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-log" Jan 22 14:01:58 crc kubenswrapper[4769]: E0122 14:01:58.575194 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-httpd" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.575201 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-httpd" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.575370 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-log" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.575395 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="84850145-89ac-4660-8a13-6abde9509589" containerName="glance-httpd" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.576346 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.579616 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.579648 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.594724 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.607702 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.608111 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.608270 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.608375 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.608744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.609453 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.609701 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.610030 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.651535 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c66f6f78c-g92qm"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.663852 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.716580 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.716660 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717207 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717859 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717909 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717929 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717963 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.717994 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.718040 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.718457 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.720612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.721116 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.721930 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.724812 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.725378 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.737700 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.747071 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " 
pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.895315 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.895605 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84850145-89ac-4660-8a13-6abde9509589" path="/var/lib/kubelet/pods/84850145-89ac-4660-8a13-6abde9509589/volumes" Jan 22 14:01:58 crc kubenswrapper[4769]: I0122 14:01:58.896404 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f79e78c3-4c98-41e2-be1e-19d794ed1c17" path="/var/lib/kubelet/pods/f79e78c3-4c98-41e2-be1e-19d794ed1c17/volumes" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.497553 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cc4c8d8bd-69kmb" event={"ID":"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f","Type":"ContainerStarted","Data":"871961a2674139b5e212b19135fb06e41841ece36cd09ff61777241cbffbea44"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.498102 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cc4c8d8bd-69kmb" event={"ID":"9a6a04bb-fa49-41f8-b75b-9c27873f8a1f","Type":"ContainerStarted","Data":"02baacda2925f01747731a4c29d0431e83e88dd74a623594d756d0f640e90a3d"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.499781 4769 generic.go:334] "Generic (PLEG): container finished" podID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerID="5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d" exitCode=0 Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.500493 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerDied","Data":"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.503401 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerStarted","Data":"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.504828 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nv6tp" event={"ID":"4b938618-acdf-4f5f-8a04-daabc17cbb0c","Type":"ContainerStarted","Data":"4814c2687ce225a42dac55f4070477c0bf4c2e838fc60d85c396b3c0a24f2c9c"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.504862 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nv6tp" event={"ID":"4b938618-acdf-4f5f-8a04-daabc17cbb0c","Type":"ContainerStarted","Data":"01b2d0c9f44658986f8b11850550b9d2274d498a3edf3bf06e168e5ce6662ef9"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.511253 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerStarted","Data":"75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.511301 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerStarted","Data":"24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.511423 4769 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-88b8d5fbf-mdp8d" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon-log" containerID="cri-o://24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c" gracePeriod=30 Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.511675 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-88b8d5fbf-mdp8d" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon" containerID="cri-o://75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17" gracePeriod=30 Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.516952 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerStarted","Data":"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.517007 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerStarted","Data":"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.520315 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7cc4c8d8bd-69kmb" podStartSLOduration=29.520294686 podStartE2EDuration="29.520294686s" podCreationTimestamp="2026-01-22 14:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:59.51789424 +0000 UTC m=+1098.929004179" watchObservedRunningTime="2026-01-22 14:01:59.520294686 +0000 UTC m=+1098.931404615" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.521745 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjdj8" event={"ID":"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8","Type":"ContainerStarted","Data":"7f8570350656236f2df14cf1385749f2acad79acf56a71c03ae5fb37c7ed236c"} Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.578971 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nv6tp" podStartSLOduration=3.578945656 podStartE2EDuration="3.578945656s" podCreationTimestamp="2026-01-22 14:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:59.574338452 +0000 UTC m=+1098.985448391" watchObservedRunningTime="2026-01-22 14:01:59.578945656 +0000 UTC m=+1098.990055595" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.601760 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-88b8d5fbf-mdp8d" podStartSLOduration=38.000940544 podStartE2EDuration="40.601742374s" podCreationTimestamp="2026-01-22 14:01:19 +0000 UTC" firstStartedPulling="2026-01-22 14:01:55.429960427 +0000 UTC m=+1094.841070356" lastFinishedPulling="2026-01-22 14:01:58.030762247 +0000 UTC m=+1097.441872186" observedRunningTime="2026-01-22 14:01:59.592106383 +0000 UTC m=+1099.003216312" watchObservedRunningTime="2026-01-22 14:01:59.601742374 +0000 UTC m=+1099.012852303" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.614386 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6464b9bcc6-tjgjv" podStartSLOduration=29.614363886 
podStartE2EDuration="29.614363886s" podCreationTimestamp="2026-01-22 14:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:01:59.614097349 +0000 UTC m=+1099.025207278" watchObservedRunningTime="2026-01-22 14:01:59.614363886 +0000 UTC m=+1099.025473815" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.632102 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bjdj8" podStartSLOduration=3.6421529550000002 podStartE2EDuration="42.632082947s" podCreationTimestamp="2026-01-22 14:01:17 +0000 UTC" firstStartedPulling="2026-01-22 14:01:19.048280728 +0000 UTC m=+1058.459390657" lastFinishedPulling="2026-01-22 14:01:58.03821072 +0000 UTC m=+1097.449320649" observedRunningTime="2026-01-22 14:01:59.627475892 +0000 UTC m=+1099.038585821" watchObservedRunningTime="2026-01-22 14:01:59.632082947 +0000 UTC m=+1099.043192876" Jan 22 14:01:59 crc kubenswrapper[4769]: I0122 14:01:59.680429 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:01:59 crc kubenswrapper[4769]: W0122 14:01:59.682542 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddab0b9a4_13fb_42b5_be06_1231f96c4016.slice/crio-d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce WatchSource:0}: Error finding container d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce: Status 404 returned error can't find the container with id d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.118904 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.460169 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.460925 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.530888 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerStarted","Data":"d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce"} Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.581661 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:02:00 crc kubenswrapper[4769]: I0122 14:02:00.582017 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.552778 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerStarted","Data":"0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a"} Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.557339 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerStarted","Data":"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a"} Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 
14:02:01.558257 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.571057 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerStarted","Data":"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"} Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.571163 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-log" containerID="cri-o://c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" gracePeriod=30 Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.571206 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-httpd" containerID="cri-o://e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" gracePeriod=30 Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.574747 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerStarted","Data":"df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee"} Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.578422 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" podStartSLOduration=42.578403751 podStartE2EDuration="42.578403751s" podCreationTimestamp="2026-01-22 14:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:01.574266209 +0000 UTC m=+1100.985376138" watchObservedRunningTime="2026-01-22 14:02:01.578403751 +0000 UTC m=+1100.989513680" Jan 22 14:02:01 crc kubenswrapper[4769]: I0122 14:02:01.600889 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=42.600873411 podStartE2EDuration="42.600873411s" podCreationTimestamp="2026-01-22 14:01:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:01.596310047 +0000 UTC m=+1101.007419976" watchObservedRunningTime="2026-01-22 14:02:01.600873411 +0000 UTC m=+1101.011983340" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.147170 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180430 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h46h6\" (UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180547 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180609 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180699 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180797 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180891 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.180966 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") pod \"20251361-dc9f-403b-bffa-2a52a61e1bf4\" (UID: \"20251361-dc9f-403b-bffa-2a52a61e1bf4\") " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.181163 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.181646 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.181944 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs" (OuterVolumeSpecName: "logs") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.189374 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts" (OuterVolumeSpecName: "scripts") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.189520 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6" (OuterVolumeSpecName: "kube-api-access-h46h6") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "kube-api-access-h46h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.193361 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.219096 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.239091 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data" (OuterVolumeSpecName: "config-data") pod "20251361-dc9f-403b-bffa-2a52a61e1bf4" (UID: "20251361-dc9f-403b-bffa-2a52a61e1bf4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283744 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20251361-dc9f-403b-bffa-2a52a61e1bf4-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283785 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283845 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283860 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283873 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20251361-dc9f-403b-bffa-2a52a61e1bf4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.283884 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h46h6\" (UniqueName: \"kubernetes.io/projected/20251361-dc9f-403b-bffa-2a52a61e1bf4-kube-api-access-h46h6\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.304258 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.385513 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583407 4769 generic.go:334] "Generic (PLEG): container finished" podID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" exitCode=143 Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583437 4769 generic.go:334] "Generic (PLEG): container finished" podID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" exitCode=143 Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583465 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583461 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerDied","Data":"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"} Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583583 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerDied","Data":"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"} Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583600 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20251361-dc9f-403b-bffa-2a52a61e1bf4","Type":"ContainerDied","Data":"64a2d7094305fe1e188755e9a76ea175f1aa7cbe4ae9900a3a9f08389e56e17f"} Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.583617 4769 scope.go:117] "RemoveContainer" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.601978 4769 scope.go:117] "RemoveContainer" containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.625948 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.629829 4769 scope.go:117] "RemoveContainer" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.631817 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:02 crc kubenswrapper[4769]: E0122 14:02:02.637120 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": container with ID starting with e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71 not found: ID does not exist" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.637179 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"} err="failed to get container status \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": rpc error: code = NotFound desc = could not find container \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": container with ID starting with e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71 not found: ID does not exist" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.637207 4769 scope.go:117] "RemoveContainer" containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" Jan 22 14:02:02 crc kubenswrapper[4769]: E0122 14:02:02.638014 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": container with ID starting with c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca not found: ID does not exist" 
containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.638039 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"} err="failed to get container status \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": rpc error: code = NotFound desc = could not find container \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": container with ID starting with c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca not found: ID does not exist" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.638057 4769 scope.go:117] "RemoveContainer" containerID="e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.640468 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71"} err="failed to get container status \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": rpc error: code = NotFound desc = could not find container \"e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71\": container with ID starting with e5f2cd1a50a52c4af55aebf79c50c4a2ae7b6a6c74ab0cbc059c7ac97bfd9f71 not found: ID does not exist" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.640582 4769 scope.go:117] "RemoveContainer" containerID="c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.641193 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca"} err="failed to get container status \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": rpc error: code = NotFound desc = could not find container \"c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca\": container with ID starting with c2fe4bab42f1a6335843c2232b24a2f046f8d9e40dc64570d58733dd3060aaca not found: ID does not exist" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.648401 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:02 crc kubenswrapper[4769]: E0122 14:02:02.648903 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-httpd" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.648916 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-httpd" Jan 22 14:02:02 crc kubenswrapper[4769]: E0122 14:02:02.648939 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-log" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.648947 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-log" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.649121 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-log" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.649137 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" containerName="glance-httpd" 
Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.650395 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.655357 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.655598 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.665310 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.794002 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795121 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795208 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795255 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795285 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795309 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.795346 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk722\" (UniqueName: \"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc 
kubenswrapper[4769]: I0122 14:02:02.795556 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.896900 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.896994 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897022 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897054 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897090 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897114 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20251361-dc9f-403b-bffa-2a52a61e1bf4" path="/var/lib/kubelet/pods/20251361-dc9f-403b-bffa-2a52a61e1bf4/volumes" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897179 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897206 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897368 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk722\" (UniqueName: 
\"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897283 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897748 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.897884 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.906017 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.909105 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.909165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.918603 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.932847 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk722\" (UniqueName: \"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") pod \"glance-default-internal-api-0\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:02 crc kubenswrapper[4769]: I0122 14:02:02.952916 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"49bcd071-b172-4180-996d-a8494ce80ab7\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:03 crc kubenswrapper[4769]: I0122 14:02:03.028979 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:03 crc kubenswrapper[4769]: I0122 14:02:03.598281 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerStarted","Data":"42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f"} Jan 22 14:02:03 crc kubenswrapper[4769]: I0122 14:02:03.633530 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:03 crc kubenswrapper[4769]: W0122 14:02:03.654291 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49bcd071_b172_4180_996d_a8494ce80ab7.slice/crio-c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922 WatchSource:0}: Error finding container c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922: Status 404 returned error can't find the container with id c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922 Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.625718 4769 generic.go:334] "Generic (PLEG): container finished" podID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" containerID="7f8570350656236f2df14cf1385749f2acad79acf56a71c03ae5fb37c7ed236c" exitCode=0 Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.626282 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjdj8" event={"ID":"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8","Type":"ContainerDied","Data":"7f8570350656236f2df14cf1385749f2acad79acf56a71c03ae5fb37c7ed236c"} Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.637948 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerStarted","Data":"938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40"} Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.638012 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerStarted","Data":"c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922"} Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.643283 4769 generic.go:334] "Generic (PLEG): container finished" podID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" containerID="4814c2687ce225a42dac55f4070477c0bf4c2e838fc60d85c396b3c0a24f2c9c" exitCode=0 Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.644417 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nv6tp" event={"ID":"4b938618-acdf-4f5f-8a04-daabc17cbb0c","Type":"ContainerDied","Data":"4814c2687ce225a42dac55f4070477c0bf4c2e838fc60d85c396b3c0a24f2c9c"} Jan 22 14:02:04 crc kubenswrapper[4769]: I0122 14:02:04.660984 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.660961935 podStartE2EDuration="6.660961935s" podCreationTimestamp="2026-01-22 14:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:03.63442585 +0000 UTC m=+1103.045535779" 
watchObservedRunningTime="2026-01-22 14:02:04.660961935 +0000 UTC m=+1104.072071864" Jan 22 14:02:05 crc kubenswrapper[4769]: I0122 14:02:05.656957 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerStarted","Data":"a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07"} Jan 22 14:02:05 crc kubenswrapper[4769]: I0122 14:02:05.689475 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.689458383 podStartE2EDuration="3.689458383s" podCreationTimestamp="2026-01-22 14:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:05.681778075 +0000 UTC m=+1105.092888024" watchObservedRunningTime="2026-01-22 14:02:05.689458383 +0000 UTC m=+1105.100568312" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.163291 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.168660 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bjdj8" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268349 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268468 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268496 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268518 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268572 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268601 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268627 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") pod \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\" (UID: \"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268651 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268688 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268740 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.268767 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") pod \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\" (UID: \"4b938618-acdf-4f5f-8a04-daabc17cbb0c\") " Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.269748 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs" (OuterVolumeSpecName: "logs") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.275027 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.278435 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj" (OuterVolumeSpecName: "kube-api-access-dsgsj") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "kube-api-access-dsgsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.279264 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.279915 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp" (OuterVolumeSpecName: "kube-api-access-g78xp") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "kube-api-access-g78xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.279985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts" (OuterVolumeSpecName: "scripts") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.281207 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts" (OuterVolumeSpecName: "scripts") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.300879 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.323349 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data" (OuterVolumeSpecName: "config-data") pod "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" (UID: "a0e92228-1a9b-49fc-9dfd-0493f70f5ee8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.337083 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.338891 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data" (OuterVolumeSpecName: "config-data") pod "4b938618-acdf-4f5f-8a04-daabc17cbb0c" (UID: "4b938618-acdf-4f5f-8a04-daabc17cbb0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371013 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371054 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371066 4769 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371075 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g78xp\" (UniqueName: \"kubernetes.io/projected/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-kube-api-access-g78xp\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371087 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371097 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371107 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsgsj\" (UniqueName: \"kubernetes.io/projected/4b938618-acdf-4f5f-8a04-daabc17cbb0c-kube-api-access-dsgsj\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371115 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371122 4769 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b938618-acdf-4f5f-8a04-daabc17cbb0c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371130 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.371137 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.673633 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bjdj8" event={"ID":"a0e92228-1a9b-49fc-9dfd-0493f70f5ee8","Type":"ContainerDied","Data":"db6d489e657294f84dd39f03818355418206b6b45168e98d6d149865405021b3"} Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.673957 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6d489e657294f84dd39f03818355418206b6b45168e98d6d149865405021b3" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.673656 4769 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bjdj8" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.683069 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nv6tp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.684281 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nv6tp" event={"ID":"4b938618-acdf-4f5f-8a04-daabc17cbb0c","Type":"ContainerDied","Data":"01b2d0c9f44658986f8b11850550b9d2274d498a3edf3bf06e168e5ce6662ef9"} Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.684332 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b2d0c9f44658986f8b11850550b9d2274d498a3edf3bf06e168e5ce6662ef9" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.759606 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b8cb8655d-vl7kp"] Jan 22 14:02:06 crc kubenswrapper[4769]: E0122 14:02:06.760191 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" containerName="keystone-bootstrap" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.760211 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" containerName="keystone-bootstrap" Jan 22 14:02:06 crc kubenswrapper[4769]: E0122 14:02:06.760245 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" containerName="placement-db-sync" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.760253 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" containerName="placement-db-sync" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.760480 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" containerName="placement-db-sync" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.760501 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" containerName="keystone-bootstrap" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.761810 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.771366 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.771490 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dx89d" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.771749 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.772133 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.781657 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.789924 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b8cb8655d-vl7kp"] Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.842916 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d8d684bc6-pmxwh"] Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.845622 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.851526 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nrw5d" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.851705 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.851876 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.852098 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.852430 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.852568 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.857408 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8d684bc6-pmxwh"] Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880820 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-public-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880884 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-logs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880932 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-combined-ca-bundle\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880953 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-config-data\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.880975 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-scripts\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.881004 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzd9\" (UniqueName: \"kubernetes.io/projected/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-kube-api-access-9zzd9\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.881025 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-internal-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985508 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-public-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985603 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-public-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985680 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-logs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985706 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj5bs\" (UniqueName: \"kubernetes.io/projected/ddb12191-d02d-4e79-82cd-d164ecaf2093-kube-api-access-lj5bs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.985734 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-config-data\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986428 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-combined-ca-bundle\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986466 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-config-data\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986485 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-combined-ca-bundle\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986679 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-credential-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986782 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-scripts\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-internal-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986912 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zzd9\" (UniqueName: \"kubernetes.io/projected/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-kube-api-access-9zzd9\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.986978 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-internal-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.987034 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-scripts\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.987132 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-fernet-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.990170 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-logs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.992531 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-scripts\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.992555 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-public-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.995105 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-config-data\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.995680 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-internal-tls-certs\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:06 crc kubenswrapper[4769]: I0122 14:02:06.996547 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-combined-ca-bundle\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.005625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zzd9\" (UniqueName: \"kubernetes.io/projected/8d4588b0-8c00-47bf-8b6d-cab4a5d792ab-kube-api-access-9zzd9\") pod \"placement-6b8cb8655d-vl7kp\" (UID: \"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab\") " pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.088997 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-fernet-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " 
pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089098 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-public-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089168 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj5bs\" (UniqueName: \"kubernetes.io/projected/ddb12191-d02d-4e79-82cd-d164ecaf2093-kube-api-access-lj5bs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089194 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-config-data\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089245 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-combined-ca-bundle\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089278 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-credential-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089316 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-internal-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.089366 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-scripts\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.092972 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-fernet-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.093641 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-scripts\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.096776 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-internal-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.097756 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-public-tls-certs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.098691 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-combined-ca-bundle\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.099073 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-credential-keys\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.100667 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddb12191-d02d-4e79-82cd-d164ecaf2093-config-data\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.111254 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.111319 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj5bs\" (UniqueName: \"kubernetes.io/projected/ddb12191-d02d-4e79-82cd-d164ecaf2093-kube-api-access-lj5bs\") pod \"keystone-d8d684bc6-pmxwh\" (UID: \"ddb12191-d02d-4e79-82cd-d164ecaf2093\") " pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.171591 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.628329 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b8cb8655d-vl7kp"] Jan 22 14:02:07 crc kubenswrapper[4769]: W0122 14:02:07.629874 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d4588b0_8c00_47bf_8b6d_cab4a5d792ab.slice/crio-7aaf3cf879704b6a9b1748dea8137d65b446a7eb6eea9afd8ade0eb1a7ff6b75 WatchSource:0}: Error finding container 7aaf3cf879704b6a9b1748dea8137d65b446a7eb6eea9afd8ade0eb1a7ff6b75: Status 404 returned error can't find the container with id 7aaf3cf879704b6a9b1748dea8137d65b446a7eb6eea9afd8ade0eb1a7ff6b75 Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.689141 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8cb8655d-vl7kp" event={"ID":"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab","Type":"ContainerStarted","Data":"7aaf3cf879704b6a9b1748dea8137d65b446a7eb6eea9afd8ade0eb1a7ff6b75"} Jan 22 14:02:07 crc kubenswrapper[4769]: W0122 14:02:07.730474 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddb12191_d02d_4e79_82cd_d164ecaf2093.slice/crio-aba3a8e3b9dab7cab488446a18c06cf01760943537af472b9227915d3f382f75 WatchSource:0}: Error finding container aba3a8e3b9dab7cab488446a18c06cf01760943537af472b9227915d3f382f75: Status 404 returned error can't find the container with id aba3a8e3b9dab7cab488446a18c06cf01760943537af472b9227915d3f382f75 Jan 22 14:02:07 crc kubenswrapper[4769]: I0122 14:02:07.735985 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d8d684bc6-pmxwh"] Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.699781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8cb8655d-vl7kp" event={"ID":"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab","Type":"ContainerStarted","Data":"7ec46ba8e82a290a46ebf843a25c1a4fc603d2f84ba0a9b9cc0de812101e9505"} Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.701560 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8d684bc6-pmxwh" event={"ID":"ddb12191-d02d-4e79-82cd-d164ecaf2093","Type":"ContainerStarted","Data":"1617ab3d54fab1a56702f1417356dc7a33c92b9329ac93066aec0d9955c04658"} Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.701616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d8d684bc6-pmxwh" event={"ID":"ddb12191-d02d-4e79-82cd-d164ecaf2093","Type":"ContainerStarted","Data":"aba3a8e3b9dab7cab488446a18c06cf01760943537af472b9227915d3f382f75"} Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.701673 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.721322 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-d8d684bc6-pmxwh" podStartSLOduration=2.721301402 podStartE2EDuration="2.721301402s" podCreationTimestamp="2026-01-22 14:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:08.71500385 +0000 UTC m=+1108.126113779" watchObservedRunningTime="2026-01-22 14:02:08.721301402 +0000 UTC m=+1108.132411331" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.896861 4769 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.896910 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.935268 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 14:02:08 crc kubenswrapper[4769]: I0122 14:02:08.939361 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.681458 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.726620 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.726673 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.775032 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:02:09 crc kubenswrapper[4769]: I0122 14:02:09.775630 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-twczw" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="dnsmasq-dns" containerID="cri-o://098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" gracePeriod=10 Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.357263 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.461610 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.473666 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.474260 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.474328 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.474409 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.474481 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") pod \"650dfc14-f283-4318-b6bc-4b17cdea15fa\" (UID: \"650dfc14-f283-4318-b6bc-4b17cdea15fa\") " Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.479471 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk" (OuterVolumeSpecName: "kube-api-access-ssjkk") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "kube-api-access-ssjkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.481718 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.481758 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.523180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.531147 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config" (OuterVolumeSpecName: "config") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.531317 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.540386 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "650dfc14-f283-4318-b6bc-4b17cdea15fa" (UID: "650dfc14-f283-4318-b6bc-4b17cdea15fa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576892 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576934 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576947 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssjkk\" (UniqueName: \"kubernetes.io/projected/650dfc14-f283-4318-b6bc-4b17cdea15fa-kube-api-access-ssjkk\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576960 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.576972 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/650dfc14-f283-4318-b6bc-4b17cdea15fa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.582405 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cc4c8d8bd-69kmb" podUID="9a6a04bb-fa49-41f8-b75b-9c27873f8a1f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.742396 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b8cb8655d-vl7kp" event={"ID":"8d4588b0-8c00-47bf-8b6d-cab4a5d792ab","Type":"ContainerStarted","Data":"824b009f42f5d4ef849d8ad3db01e1ccf33eb73bee32e627513e9c3e9f3bd7ed"} Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.743961 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.744023 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.752581 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerStarted","Data":"bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217"} Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763215 4769 generic.go:334] "Generic (PLEG): container finished" podID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerID="098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" exitCode=0 Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763297 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-twczw" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerDied","Data":"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b"} Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763365 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-twczw" event={"ID":"650dfc14-f283-4318-b6bc-4b17cdea15fa","Type":"ContainerDied","Data":"a54623f453232dd2973918c8cc988921d99892583486b82a39525e719c837225"} Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.763389 4769 scope.go:117] "RemoveContainer" containerID="098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.776192 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b8cb8655d-vl7kp" podStartSLOduration=4.776173099 podStartE2EDuration="4.776173099s" podCreationTimestamp="2026-01-22 14:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:10.770128326 +0000 UTC m=+1110.181238255" watchObservedRunningTime="2026-01-22 14:02:10.776173099 +0000 UTC m=+1110.187283028" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.804543 4769 scope.go:117] "RemoveContainer" containerID="1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.808693 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.836334 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-twczw"] Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.847816 4769 scope.go:117] "RemoveContainer" containerID="098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" Jan 22 14:02:10 crc kubenswrapper[4769]: E0122 14:02:10.849195 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b\": container with ID starting with 098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b not found: ID does not exist" containerID="098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.849249 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b"} err="failed to get container status \"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b\": rpc error: code = NotFound desc = could not find container \"098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b\": container with ID starting with 098ee03ef551965af984bff04a29c55f7d0f27976988405cdc2003fa044f9d9b not found: ID does not exist" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.849274 4769 scope.go:117] "RemoveContainer" containerID="1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5" Jan 22 14:02:10 crc kubenswrapper[4769]: E0122 14:02:10.850156 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5\": container with ID starting with 1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5 not found: ID does not exist" containerID="1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.850184 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5"} err="failed to get container status \"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5\": rpc error: code = NotFound desc = could not find container \"1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5\": container with ID starting with 1b5e22d53825ab8bee8892212745d8ad1728568928a82c44731ac44eedd528b5 not found: ID does not exist" Jan 22 14:02:10 crc kubenswrapper[4769]: I0122 14:02:10.897245 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" path="/var/lib/kubelet/pods/650dfc14-f283-4318-b6bc-4b17cdea15fa/volumes" Jan 22 14:02:11 crc kubenswrapper[4769]: I0122 14:02:11.773206 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:02:11 crc kubenswrapper[4769]: I0122 14:02:11.773549 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.450785 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.485669 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.784621 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l4hnw" event={"ID":"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06","Type":"ContainerStarted","Data":"5e70825bce9fda82996c69d7184b5c0089e4b77074cca5f87821576c29bc3590"} Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.790592 4769 generic.go:334] "Generic (PLEG): container finished" podID="f7c0ef06-5806-418c-8a10-81ea6afb0401" containerID="3c1a07b1b0fdcc85ff1215b6b0ffc50eb270b562fc9ca8873d111f3b05220e1b" exitCode=0 Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.790689 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rqjpw" event={"ID":"f7c0ef06-5806-418c-8a10-81ea6afb0401","Type":"ContainerDied","Data":"3c1a07b1b0fdcc85ff1215b6b0ffc50eb270b562fc9ca8873d111f3b05220e1b"} Jan 22 14:02:12 crc kubenswrapper[4769]: I0122 14:02:12.826825 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-l4hnw" podStartSLOduration=3.278495364 podStartE2EDuration="55.826784512s" podCreationTimestamp="2026-01-22 14:01:17 +0000 UTC" firstStartedPulling="2026-01-22 14:01:18.946661322 +0000 UTC m=+1058.357771251" lastFinishedPulling="2026-01-22 14:02:11.49495047 +0000 UTC m=+1110.906060399" observedRunningTime="2026-01-22 14:02:12.813607104 +0000 UTC m=+1112.224717033" watchObservedRunningTime="2026-01-22 14:02:12.826784512 +0000 UTC m=+1112.237894441" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.029344 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.029689 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.089374 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.100332 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.813720 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzjpd" event={"ID":"a7f766e1-262c-4861-a117-2454631e284f","Type":"ContainerStarted","Data":"fe625d5ef022f97b15014934b8ace95f1c730255ffa2604dde5ccc072b731811"} Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.815454 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.815506 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:13 crc kubenswrapper[4769]: I0122 14:02:13.842764 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-zzjpd" podStartSLOduration=3.1365269749999998 podStartE2EDuration="56.842745529s" podCreationTimestamp="2026-01-22 14:01:17 +0000 UTC" firstStartedPulling="2026-01-22 14:01:18.962989665 +0000 UTC m=+1058.374099594" lastFinishedPulling="2026-01-22 14:02:12.669208219 +0000 UTC m=+1112.080318148" observedRunningTime="2026-01-22 14:02:13.838397961 +0000 UTC m=+1113.249507890" watchObservedRunningTime="2026-01-22 14:02:13.842745529 +0000 UTC m=+1113.253855448" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.242865 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.359144 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") pod \"f7c0ef06-5806-418c-8a10-81ea6afb0401\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.359287 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") pod \"f7c0ef06-5806-418c-8a10-81ea6afb0401\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.359380 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") pod \"f7c0ef06-5806-418c-8a10-81ea6afb0401\" (UID: \"f7c0ef06-5806-418c-8a10-81ea6afb0401\") " Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.387002 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc" (OuterVolumeSpecName: "kube-api-access-rzsdc") pod "f7c0ef06-5806-418c-8a10-81ea6afb0401" (UID: "f7c0ef06-5806-418c-8a10-81ea6afb0401"). InnerVolumeSpecName "kube-api-access-rzsdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.396888 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config" (OuterVolumeSpecName: "config") pod "f7c0ef06-5806-418c-8a10-81ea6afb0401" (UID: "f7c0ef06-5806-418c-8a10-81ea6afb0401"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.401964 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7c0ef06-5806-418c-8a10-81ea6afb0401" (UID: "f7c0ef06-5806-418c-8a10-81ea6afb0401"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.462076 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.462116 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7c0ef06-5806-418c-8a10-81ea6afb0401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.462136 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzsdc\" (UniqueName: \"kubernetes.io/projected/f7c0ef06-5806-418c-8a10-81ea6afb0401-kube-api-access-rzsdc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.824162 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rqjpw" Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.824179 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rqjpw" event={"ID":"f7c0ef06-5806-418c-8a10-81ea6afb0401","Type":"ContainerDied","Data":"f5f34c732ee37b95ec899f49855f9cce53d55317437fe6fd87284898a608994d"} Jan 22 14:02:14 crc kubenswrapper[4769]: I0122 14:02:14.824254 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5f34c732ee37b95ec899f49855f9cce53d55317437fe6fd87284898a608994d" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.100812 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"] Jan 22 14:02:15 crc kubenswrapper[4769]: E0122 14:02:15.101215 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="dnsmasq-dns" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101238 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="dnsmasq-dns" Jan 22 14:02:15 crc kubenswrapper[4769]: E0122 14:02:15.101257 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c0ef06-5806-418c-8a10-81ea6afb0401" containerName="neutron-db-sync" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101265 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c0ef06-5806-418c-8a10-81ea6afb0401" containerName="neutron-db-sync" Jan 22 14:02:15 crc kubenswrapper[4769]: E0122 14:02:15.101281 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="init" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101289 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="init" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101535 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="650dfc14-f283-4318-b6bc-4b17cdea15fa" containerName="dnsmasq-dns" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.101581 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c0ef06-5806-418c-8a10-81ea6afb0401" containerName="neutron-db-sync" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.102713 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.140382 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"] Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.203914 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.206269 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216219 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216440 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216723 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216834 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.216963 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7p5j2" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.227972 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284389 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284446 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284521 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284553 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.284580 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: 
I0122 14:02:15.385931 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.385989 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386040 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386062 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386123 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386204 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386256 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386280 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc 
kubenswrapper[4769]: I0122 14:02:15.386321 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.386996 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.387636 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.387738 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.388109 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.388478 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.411687 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") pod \"dnsmasq-dns-84b966f6c9-86ktd\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") " pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.453037 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488404 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488493 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488541 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.488570 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.500606 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.500820 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.501660 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.514686 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.525498 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") pod \"neutron-7ffdb95bfd-x5vfj\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.551208 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.831572 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:02:15 crc kubenswrapper[4769]: I0122 14:02:15.831926 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.057531 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"] Jan 22 14:02:16 crc kubenswrapper[4769]: W0122 14:02:16.067301 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc490b1f2_d1fa_4db7_8aeb_97c8bb694323.slice/crio-793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710 WatchSource:0}: Error finding container 793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710: Status 404 returned error can't find the container with id 793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710 Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.231990 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.351585 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.438697 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.734376 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b8cb8655d-vl7kp" Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.875111 4769 generic.go:334] "Generic (PLEG): container finished" podID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerID="20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1" exitCode=0 Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.875540 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerDied","Data":"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"} Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.875579 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerStarted","Data":"793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710"} Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.929549 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerStarted","Data":"c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79"} Jan 22 14:02:16 crc kubenswrapper[4769]: I0122 14:02:16.929615 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" 
event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerStarted","Data":"7728df5824bdc02cf7f433c8c65dbea0209e0b45bf371c7fd3ff2a02c06db9ef"} Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.342819 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d6bcd56b9-2hx4m"] Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.345191 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.346827 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d6bcd56b9-2hx4m"] Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.349385 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.353066 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483817 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85qfb\" (UniqueName: \"kubernetes.io/projected/a582ad75-7aa2-4ee6-9631-6726b7db9650-kube-api-access-85qfb\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483851 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-internal-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483882 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-combined-ca-bundle\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483946 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-public-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.483963 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-ovndb-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.484034 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-httpd-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.585622 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-httpd-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586032 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586088 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85qfb\" (UniqueName: \"kubernetes.io/projected/a582ad75-7aa2-4ee6-9631-6726b7db9650-kube-api-access-85qfb\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586123 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-internal-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586174 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-combined-ca-bundle\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586222 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-public-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.586242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-ovndb-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.592828 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-internal-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.593495 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-httpd-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: 
\"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.593936 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-config\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.594090 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-ovndb-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.599331 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-public-tls-certs\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.601992 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85qfb\" (UniqueName: \"kubernetes.io/projected/a582ad75-7aa2-4ee6-9631-6726b7db9650-kube-api-access-85qfb\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.610156 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a582ad75-7aa2-4ee6-9631-6726b7db9650-combined-ca-bundle\") pod \"neutron-5d6bcd56b9-2hx4m\" (UID: \"a582ad75-7aa2-4ee6-9631-6726b7db9650\") " pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.686374 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.926768 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerStarted","Data":"1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0"} Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.927024 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.929938 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerStarted","Data":"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"} Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.930326 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.963460 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7ffdb95bfd-x5vfj" podStartSLOduration=2.963442402 podStartE2EDuration="2.963442402s" podCreationTimestamp="2026-01-22 14:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:17.948427105 +0000 UTC m=+1117.359537044" watchObservedRunningTime="2026-01-22 14:02:17.963442402 +0000 UTC m=+1117.374552331" Jan 22 14:02:17 crc kubenswrapper[4769]: I0122 14:02:17.975998 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" podStartSLOduration=2.975976912 podStartE2EDuration="2.975976912s" podCreationTimestamp="2026-01-22 14:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:17.971073849 +0000 UTC m=+1117.382183798" watchObservedRunningTime="2026-01-22 14:02:17.975976912 +0000 UTC m=+1117.387086831" Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.460051 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d6bcd56b9-2hx4m"] Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.940896 4769 generic.go:334] "Generic (PLEG): container finished" podID="a7f766e1-262c-4861-a117-2454631e284f" containerID="fe625d5ef022f97b15014934b8ace95f1c730255ffa2604dde5ccc072b731811" exitCode=0 Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.941098 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzjpd" event={"ID":"a7f766e1-262c-4861-a117-2454631e284f","Type":"ContainerDied","Data":"fe625d5ef022f97b15014934b8ace95f1c730255ffa2604dde5ccc072b731811"} Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.944036 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6bcd56b9-2hx4m" event={"ID":"a582ad75-7aa2-4ee6-9631-6726b7db9650","Type":"ContainerStarted","Data":"5a0f367e33b6d3fac05f5d699bddf82b4168cc01b56962481ed708c42f0fa01e"} Jan 22 14:02:18 crc kubenswrapper[4769]: I0122 14:02:18.944074 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6bcd56b9-2hx4m" event={"ID":"a582ad75-7aa2-4ee6-9631-6726b7db9650","Type":"ContainerStarted","Data":"3b057196b0db48832fa9a6e783c46500568af399275ddd4bc07b7490dfe7e4d5"} Jan 22 14:02:20 crc 
kubenswrapper[4769]: I0122 14:02:20.460807 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:02:20 crc kubenswrapper[4769]: I0122 14:02:20.582198 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7cc4c8d8bd-69kmb" podUID="9a6a04bb-fa49-41f8-b75b-9c27873f8a1f" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Jan 22 14:02:20 crc kubenswrapper[4769]: I0122 14:02:20.962942 4769 generic.go:334] "Generic (PLEG): container finished" podID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" containerID="5e70825bce9fda82996c69d7184b5c0089e4b77074cca5f87821576c29bc3590" exitCode=0 Jan 22 14:02:20 crc kubenswrapper[4769]: I0122 14:02:20.962986 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l4hnw" event={"ID":"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06","Type":"ContainerDied","Data":"5e70825bce9fda82996c69d7184b5c0089e4b77074cca5f87821576c29bc3590"} Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.561665 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.702614 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") pod \"a7f766e1-262c-4861-a117-2454631e284f\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.702784 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") pod \"a7f766e1-262c-4861-a117-2454631e284f\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.702971 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") pod \"a7f766e1-262c-4861-a117-2454631e284f\" (UID: \"a7f766e1-262c-4861-a117-2454631e284f\") " Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.709568 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a7f766e1-262c-4861-a117-2454631e284f" (UID: "a7f766e1-262c-4861-a117-2454631e284f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.709749 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7" (OuterVolumeSpecName: "kube-api-access-pbsw7") pod "a7f766e1-262c-4861-a117-2454631e284f" (UID: "a7f766e1-262c-4861-a117-2454631e284f"). InnerVolumeSpecName "kube-api-access-pbsw7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.739241 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7f766e1-262c-4861-a117-2454631e284f" (UID: "a7f766e1-262c-4861-a117-2454631e284f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.804870 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.804909 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f766e1-262c-4861-a117-2454631e284f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:23 crc kubenswrapper[4769]: I0122 14:02:23.804920 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbsw7\" (UniqueName: \"kubernetes.io/projected/a7f766e1-262c-4861-a117-2454631e284f-kube-api-access-pbsw7\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.011271 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-zzjpd" event={"ID":"a7f766e1-262c-4861-a117-2454631e284f","Type":"ContainerDied","Data":"d9766e548e18d10e2948ccf9973b496ef374cc1f1a4772a78ff7fa96b507f7e2"} Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.011580 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9766e548e18d10e2948ccf9973b496ef374cc1f1a4772a78ff7fa96b507f7e2" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.011650 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-zzjpd" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.032234 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-l4hnw" event={"ID":"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06","Type":"ContainerDied","Data":"81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6"} Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.032279 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81f9fccf6c7c0251061ae1067ee4088dd1acc6cd4f8ca50a99ec0953acadb3c6" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.041742 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109421 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109469 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109513 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109598 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109718 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109759 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") pod \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\" (UID: \"3eb8819f-512d-43d8-a59e-1ba8e7e1fb06\") " Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.109978 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.110297 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.113180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.113437 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts" (OuterVolumeSpecName: "scripts") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.121212 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx" (OuterVolumeSpecName: "kube-api-access-hrgpx") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "kube-api-access-hrgpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.184895 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.201981 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data" (OuterVolumeSpecName: "config-data") pod "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" (UID: "3eb8819f-512d-43d8-a59e-1ba8e7e1fb06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212294 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212335 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212351 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrgpx\" (UniqueName: \"kubernetes.io/projected/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-kube-api-access-hrgpx\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212366 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.212377 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:24 crc kubenswrapper[4769]: E0122 14:02:24.282147 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 
14:02:24.830672 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-79fdf5695-77th5"] Jan 22 14:02:24 crc kubenswrapper[4769]: E0122 14:02:24.831481 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f766e1-262c-4861-a117-2454631e284f" containerName="barbican-db-sync" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.831502 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f766e1-262c-4861-a117-2454631e284f" containerName="barbican-db-sync" Jan 22 14:02:24 crc kubenswrapper[4769]: E0122 14:02:24.831552 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" containerName="cinder-db-sync" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.831561 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" containerName="cinder-db-sync" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.831785 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" containerName="cinder-db-sync" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.831841 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f766e1-262c-4861-a117-2454631e284f" containerName="barbican-db-sync" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.832980 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.836825 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.839079 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qkkxv" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.839301 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.840853 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79fdf5695-77th5"] Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.912860 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fffc955cd-tlfq2"] Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.914565 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.924507 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926113 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926164 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d271baa-4d4e-42f2-87ec-a0c8a7314560-logs\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926221 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvv7f\" (UniqueName: \"kubernetes.io/projected/2d271baa-4d4e-42f2-87ec-a0c8a7314560-kube-api-access-pvv7f\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926406 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-combined-ca-bundle\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.926511 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data-custom\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.942339 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fffc955cd-tlfq2"] Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.966354 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"] Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.966608 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns" containerID="cri-o://dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab" gracePeriod=10 Jan 22 14:02:24 crc kubenswrapper[4769]: I0122 14:02:24.981967 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.017052 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.018559 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.028976 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ced7731-706e-49ab-8e05-af9f7dc7465a-logs\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029037 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-combined-ca-bundle\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029057 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7kz\" (UniqueName: \"kubernetes.io/projected/1ced7731-706e-49ab-8e05-af9f7dc7465a-kube-api-access-fm7kz\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data-custom\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029129 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data-custom\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029259 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-combined-ca-bundle\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029292 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029312 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d271baa-4d4e-42f2-87ec-a0c8a7314560-logs\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029334 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvv7f\" 
(UniqueName: \"kubernetes.io/projected/2d271baa-4d4e-42f2-87ec-a0c8a7314560-kube-api-access-pvv7f\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.029374 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.038029 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d271baa-4d4e-42f2-87ec-a0c8a7314560-logs\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.039361 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.045964 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.052501 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-combined-ca-bundle\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.068595 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d271baa-4d4e-42f2-87ec-a0c8a7314560-config-data-custom\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.094767 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvv7f\" (UniqueName: \"kubernetes.io/projected/2d271baa-4d4e-42f2-87ec-a0c8a7314560-kube-api-access-pvv7f\") pod \"barbican-worker-79fdf5695-77th5\" (UID: \"2d271baa-4d4e-42f2-87ec-a0c8a7314560\") " pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116238 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerStarted","Data":"de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a"} Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116356 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="ceilometer-notification-agent" containerID="cri-o://0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a" gracePeriod=30 Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116421 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116507 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="proxy-httpd" containerID="cri-o://de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a" gracePeriod=30 Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.116556 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="sg-core" containerID="cri-o://bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217" gracePeriod=30 Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.123155 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.124560 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.132699 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.133704 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.133747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ced7731-706e-49ab-8e05-af9f7dc7465a-logs\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.133812 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm7kz\" (UniqueName: \"kubernetes.io/projected/1ced7731-706e-49ab-8e05-af9f7dc7465a-kube-api-access-fm7kz\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.133839 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.134596 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ced7731-706e-49ab-8e05-af9f7dc7465a-logs\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137114 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") pod 
\"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137202 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data-custom\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137773 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-combined-ca-bundle\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137845 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137862 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.137934 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.144244 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-combined-ca-bundle\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.145696 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.152481 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-79fdf5695-77th5" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.162526 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ced7731-706e-49ab-8e05-af9f7dc7465a-config-data-custom\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.168289 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-l4hnw" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.169567 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d6bcd56b9-2hx4m" event={"ID":"a582ad75-7aa2-4ee6-9631-6726b7db9650","Type":"ContainerStarted","Data":"82a874788375fae26a0951e4470e5e91fb777e86404e359d8d7d7bad73728bb6"} Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.169965 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.173617 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm7kz\" (UniqueName: \"kubernetes.io/projected/1ced7731-706e-49ab-8e05-af9f7dc7465a-kube-api-access-fm7kz\") pod \"barbican-keystone-listener-fffc955cd-tlfq2\" (UID: \"1ced7731-706e-49ab-8e05-af9f7dc7465a\") " pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.219664 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240033 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240096 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240152 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240253 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240313 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240340 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240365 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240399 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br56m\" (UniqueName: \"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240424 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240476 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.240495 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.242765 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.242957 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.243153 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.243719 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.250206 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.262178 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.273975 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d6bcd56b9-2hx4m" podStartSLOduration=8.273955787 podStartE2EDuration="8.273955787s" podCreationTimestamp="2026-01-22 14:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:25.221838953 +0000 UTC m=+1124.632948892" watchObservedRunningTime="2026-01-22 14:02:25.273955787 +0000 UTC m=+1124.685065716" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.317450 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") pod \"dnsmasq-dns-75c8ddd69c-6tm8v\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") " pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341812 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341863 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341944 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br56m\" (UniqueName: 
\"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.341973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.342555 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.348675 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.365561 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.371090 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br56m\" (UniqueName: \"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.385429 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") pod \"barbican-api-6bc9c49fb8-n7dm2\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.403734 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.408105 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.413691 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.413839 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-m6vjl" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.414072 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.414201 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.443962 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444013 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444037 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444063 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444110 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.444163 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.446180 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.454586 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused" Jan 22 14:02:25 crc 
kubenswrapper[4769]: I0122 14:02:25.548313 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548370 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548396 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548423 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548459 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.548487 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.558286 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.561831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.563178 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"]
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.579743 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.591615 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.598781 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.599657 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.631361 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6bc9c49fb8-n7dm2"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.641065 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") pod \"cinder-scheduler-0\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.672054 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"]
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.716274 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760496 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"]
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760557 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760593 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760626 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760674 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760705 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.760786 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.763020 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.789291 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.792464 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.799164 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.832335 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864303 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864366 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864394 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864457 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864492 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864518 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864562 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864621 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864643 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864682 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864740 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.864783 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.865754 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.865929 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.866402 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.866840 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.872495 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.889863 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") pod \"dnsmasq-dns-5784cf869f-gjxrr\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966068 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966144 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966173 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966293 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966322 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966408 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966459 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966699 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.966922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.971393 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.975054 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.976704 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.978126 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:25 crc kubenswrapper[4769]: I0122 14:02:25.984503 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") pod \"cinder-api-0\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " pod="openstack/cinder-api-0"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.100503 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.145197 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fffc955cd-tlfq2"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.171478 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183014 4769 generic.go:334] "Generic (PLEG): container finished" podID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerID="dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab" exitCode=0
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183070 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183089 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerDied","Data":"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183120 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-86ktd" event={"ID":"c490b1f2-d1fa-4db7-8aeb-97c8bb694323","Type":"ContainerDied","Data":"793616f841995ae0490e98118d3493c2f1448e1097fa4b42bba1bfcb0fff0710"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.183138 4769 scope.go:117] "RemoveContainer" containerID="dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.184483 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" event={"ID":"1ced7731-706e-49ab-8e05-af9f7dc7465a","Type":"ContainerStarted","Data":"c15dda8bdf2b7e8286d94f00e80ce04f6039691eef8d0e2a5c3246fe9de51dc2"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187519 4769 generic.go:334] "Generic (PLEG): container finished" podID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerID="de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a" exitCode=0
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187599 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerDied","Data":"de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187632 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerDied","Data":"bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187608 4769 generic.go:334] "Generic (PLEG): container finished" podID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerID="bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217" exitCode=2
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187659 4769 generic.go:334] "Generic (PLEG): container finished" podID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerID="0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a" exitCode=0
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.187757 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerDied","Data":"0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a"}
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.220584 4769 scope.go:117] "RemoveContainer" containerID="20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.239942 4769 scope.go:117] "RemoveContainer" containerID="dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"
Jan 22 14:02:26 crc kubenswrapper[4769]: E0122 14:02:26.240372 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab\": container with ID starting with dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab not found: ID does not exist" containerID="dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.240403 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab"} err="failed to get container status \"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab\": rpc error: code = NotFound desc = could not find container \"dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab\": container with ID starting with dcf65cb3f9e2afa84af423f382b410a4f6ad273e1b71084aa7b89b603bbfc0ab not found: ID does not exist"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.240424 4769 scope.go:117] "RemoveContainer" containerID="20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"
Jan 22 14:02:26 crc kubenswrapper[4769]: E0122 14:02:26.240778 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1\": container with ID starting with 20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1 not found: ID does not exist" containerID="20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.241157 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1"} err="failed to get container status \"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1\": rpc error: code = NotFound desc = could not find container \"20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1\": container with ID starting with 20368f0045746ae0eecdaf41771b04b1db51dc750b5f58a1ea919250b07080f1 not found: ID does not exist"
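
The paired E-lines above are the benign shape of container removal racing the runtime: scope.go issues RemoveContainer, and the follow-up ContainerStatus probe gets gRPC NotFound because CRI-O has already dropped the container, so deletion ends up idempotent. A sketch of that classification, with a caller-supplied remove function standing in for a real CRI client (the stub and function names are assumptions of this sketch, not kubelet code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a NotFound from the runtime as success:
// a previous attempt (or the runtime itself) already removed it.
func removeContainer(id string, remove func(string) error) error {
	err := remove(id)
	if err == nil {
		return nil
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		return nil // already gone; deletion is idempotent
	}
	return fmt.Errorf("remove %s: %w", id, err)
}

func main() {
	// Stub runtime that answers the way CRI-O does in the log above.
	stub := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(removeContainer("dcf65cb3f9e2", stub)) // <nil>
}
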
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.262198 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277056 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277115 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277164 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277209 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277261 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.277347 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") pod \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\" (UID: \"c490b1f2-d1fa-4db7-8aeb-97c8bb694323\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.284379 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj" (OuterVolumeSpecName: "kube-api-access-7zhpj") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "kube-api-access-7zhpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.297593 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-79fdf5695-77th5"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.306615 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.379169 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zhpj\" (UniqueName: \"kubernetes.io/projected/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-kube-api-access-7zhpj\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.383974 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.394337 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.398297 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config" (OuterVolumeSpecName: "config") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.402436 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.407804 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c490b1f2-d1fa-4db7-8aeb-97c8bb694323" (UID: "c490b1f2-d1fa-4db7-8aeb-97c8bb694323"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.479192 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480408 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480430 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480441 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480450 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.480460 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c490b1f2-d1fa-4db7-8aeb-97c8bb694323-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:26 crc kubenswrapper[4769]: W0122 14:02:26.481889 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4383579e_af20_4ae8_89f7_bdaf6480881a.slice/crio-f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc WatchSource:0}: Error finding container f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc: Status 404 returned error can't find the container with id f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.596886 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.726864 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.879771 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.915178 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.944498 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.953691 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-86ktd"]
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.988891 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.989213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.989649 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.989817 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.990015 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.990128 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.990238 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") pod \"7464458e-c450-4b87-80d6-30abeb62e9d2\" (UID: \"7464458e-c450-4b87-80d6-30abeb62e9d2\") "
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.991258 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.991716 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.995561 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts" (OuterVolumeSpecName: "scripts") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:02:26 crc kubenswrapper[4769]: I0122 14:02:26.995556 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr" (OuterVolumeSpecName: "kube-api-access-bnkhr") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "kube-api-access-bnkhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.045505 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092557 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092590 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092605 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnkhr\" (UniqueName: \"kubernetes.io/projected/7464458e-c450-4b87-80d6-30abeb62e9d2-kube-api-access-bnkhr\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092616 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.092627 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7464458e-c450-4b87-80d6-30abeb62e9d2-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.093883 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.097151 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data" (OuterVolumeSpecName: "config-data") pod "7464458e-c450-4b87-80d6-30abeb62e9d2" (UID: "7464458e-c450-4b87-80d6-30abeb62e9d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.195931 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.195992 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7464458e-c450-4b87-80d6-30abeb62e9d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.199262 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerStarted","Data":"959f5ec3a165a64e510bc22f94aef93dcf00ba618851c77ce98857a8cd8feb32"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.201670 4769 generic.go:334] "Generic (PLEG): container finished" podID="626171a3-dca4-4c26-9879-4127f41d2543" containerID="209229b23f1b1a54f7e75b6d45c01d01fc6ff63ee1dd1e208ead8428de3d7cca" exitCode=0
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.201744 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" event={"ID":"626171a3-dca4-4c26-9879-4127f41d2543","Type":"ContainerDied","Data":"209229b23f1b1a54f7e75b6d45c01d01fc6ff63ee1dd1e208ead8428de3d7cca"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.201805 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" event={"ID":"626171a3-dca4-4c26-9879-4127f41d2543","Type":"ContainerStarted","Data":"849a951f9f8aa32b267dc7a128a172f08b4ef52390b9e79aa78ce1d223d66cba"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.214892 4769 generic.go:334] "Generic (PLEG): container finished" podID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerID="8cddcdbb8911a19c3b16e342ad30ed08a0f42dc1a1d70ee5aaed962fdb512de3" exitCode=0
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.215003 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerDied","Data":"8cddcdbb8911a19c3b16e342ad30ed08a0f42dc1a1d70ee5aaed962fdb512de3"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.215275 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerStarted","Data":"d6c99dc7e96389aa270b082a25059df7fce55051d25083a5534ef853a5abe126"}
Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.224060 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerStarted","Data":"f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc"}
event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerStarted","Data":"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908"} Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.237758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerStarted","Data":"9af8e79839bd151effc1aa29a1d456de2993b92396c6ddf4772fc15ecf95323b"} Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.247178 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7464458e-c450-4b87-80d6-30abeb62e9d2","Type":"ContainerDied","Data":"21b21bef7c85b718cfdbb016fe626efbd1ab870c4b734875a383413b1b9ca2cc"} Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.247242 4769 scope.go:117] "RemoveContainer" containerID="de8d0b9e577390cb06c5c39aa9aa3dc44fef05360ada1ac35892600534d6f60a" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.247290 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.267524 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79fdf5695-77th5" event={"ID":"2d271baa-4d4e-42f2-87ec-a0c8a7314560","Type":"ContainerStarted","Data":"6f0f4cabb7f607a85e05f6796ffa4125f9f0133df87665b8443130a4140d00af"} Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.344634 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.363817 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.414419 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415333 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="sg-core" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415366 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="sg-core" Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415401 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415409 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns" Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415431 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="proxy-httpd" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415438 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="proxy-httpd" Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415454 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="ceilometer-notification-agent" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415460 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="ceilometer-notification-agent" Jan 22 14:02:27 crc kubenswrapper[4769]: E0122 14:02:27.415480 4769 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="init" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415487 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="init" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.415974 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="sg-core" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.416009 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="proxy-httpd" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.416041 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" containerName="ceilometer-notification-agent" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.416069 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" containerName="dnsmasq-dns" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.418761 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.421453 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.421816 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.435592 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.604576 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605107 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605131 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605152 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605312 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zn7\" (UniqueName: \"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") pod \"ceilometer-0\" (UID: 
\"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605423 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.605517 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707308 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zn7\" (UniqueName: \"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707478 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707549 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707740 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707882 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707911 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.707933 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.708642 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") pod \"ceilometer-0\" (UID: 
\"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.708972 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.715661 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.716008 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.721294 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.726218 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.735881 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zn7\" (UniqueName: \"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") pod \"ceilometer-0\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " pod="openstack/ceilometer-0" Jan 22 14:02:27 crc kubenswrapper[4769]: I0122 14:02:27.898169 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.040405 4769 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.040405 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v"
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216415 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216558 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216676 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216700 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216751 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.216828 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") pod \"626171a3-dca4-4c26-9879-4127f41d2543\" (UID: \"626171a3-dca4-4c26-9879-4127f41d2543\") "
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.220710 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p" (OuterVolumeSpecName: "kube-api-access-x477p") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "kube-api-access-x477p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.238214 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.241409 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config" (OuterVolumeSpecName: "config") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.241561 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.242573 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.242703 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "626171a3-dca4-4c26-9879-4127f41d2543" (UID: "626171a3-dca4-4c26-9879-4127f41d2543"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.275781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerStarted","Data":"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4"}
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.277502 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v"
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.277973 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-6tm8v" event={"ID":"626171a3-dca4-4c26-9879-4127f41d2543","Type":"ContainerDied","Data":"849a951f9f8aa32b267dc7a128a172f08b4ef52390b9e79aa78ce1d223d66cba"}
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.280411 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerStarted","Data":"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20"}
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.281507 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bc9c49fb8-n7dm2"
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.281580 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6bc9c49fb8-n7dm2"
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.288729 4769 scope.go:117] "RemoveContainer" containerID="bea22c9f83f03abc375d02e9ba136f822fe98bedf79bd391257fedebc9743217"
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.309673 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podStartSLOduration=3.3096536690000002 podStartE2EDuration="3.309653669s" podCreationTimestamp="2026-01-22 14:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:28.300156202 +0000 UTC m=+1127.711266141" watchObservedRunningTime="2026-01-22 14:02:28.309653669 +0000 UTC m=+1127.720763598"
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321182 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321216 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-config\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321228 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x477p\" (UniqueName: \"kubernetes.io/projected/626171a3-dca4-4c26-9879-4127f41d2543-kube-api-access-x477p\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321240 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321252 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.321263 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/626171a3-dca4-4c26-9879-4127f41d2543-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.340519 4769 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.350115 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-6tm8v"] Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.375959 4769 scope.go:117] "RemoveContainer" containerID="0caf44996649384d0bbc9bf8f4235fe301ea6cdb45a76523aeef46f47efee20a" Jan 22 14:02:28 crc kubenswrapper[4769]: I0122 14:02:28.468053 4769 scope.go:117] "RemoveContainer" containerID="209229b23f1b1a54f7e75b6d45c01d01fc6ff63ee1dd1e208ead8428de3d7cca" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.345336 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="626171a3-dca4-4c26-9879-4127f41d2543" path="/var/lib/kubelet/pods/626171a3-dca4-4c26-9879-4127f41d2543/volumes" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.356711 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7464458e-c450-4b87-80d6-30abeb62e9d2" path="/var/lib/kubelet/pods/7464458e-c450-4b87-80d6-30abeb62e9d2/volumes" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.357928 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c490b1f2-d1fa-4db7-8aeb-97c8bb694323" path="/var/lib/kubelet/pods/c490b1f2-d1fa-4db7-8aeb-97c8bb694323/volumes" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.366711 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.370602 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerStarted","Data":"fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8"} Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.372524 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.380200 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" event={"ID":"1ced7731-706e-49ab-8e05-af9f7dc7465a","Type":"ContainerStarted","Data":"65eb749f9ee1ea25ed9259f38da2dd786dfee88fb385aca20cfb1072c7036290"} Jan 22 14:02:29 crc kubenswrapper[4769]: W0122 14:02:29.393108 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode12c3fd8_b199_4dbb_8022_ea1997362b45.slice/crio-6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4 WatchSource:0}: Error finding container 6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4: Status 404 returned error can't find the container with id 6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4 Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.396737 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" podStartSLOduration=4.396712315 podStartE2EDuration="4.396712315s" podCreationTimestamp="2026-01-22 14:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:29.392921411 +0000 UTC m=+1128.804031350" watchObservedRunningTime="2026-01-22 14:02:29.396712315 +0000 UTC m=+1128.807822244" Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.397334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-79fdf5695-77th5" event={"ID":"2d271baa-4d4e-42f2-87ec-a0c8a7314560","Type":"ContainerStarted","Data":"8f86e4936d00108837d120533e409cfd99d6a44762e0c92786aa925fd1727a56"} Jan 22 14:02:29 crc kubenswrapper[4769]: I0122 14:02:29.471933 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.456736 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.457284 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.459702 4769 generic.go:334] "Generic (PLEG): container finished" podID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerID="75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17" exitCode=137 Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.459945 4769 generic.go:334] "Generic (PLEG): container finished" podID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerID="24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c" exitCode=137 Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.460091 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerDied","Data":"75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.460144 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerDied","Data":"24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.473369 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.473613 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-79fdf5695-77th5" event={"ID":"2d271baa-4d4e-42f2-87ec-a0c8a7314560","Type":"ContainerStarted","Data":"9f6f770e0e0c87d16cef983ad2564a7a8925aa20d641e0e0a9d7c39d098160dc"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.475728 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerStarted","Data":"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.475940 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api-log" containerID="cri-o://7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" gracePeriod=30 Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.476250 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.476337 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api" containerID="cri-o://a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" gracePeriod=30 Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.481428 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerStarted","Data":"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.537676 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.537652971 podStartE2EDuration="5.537652971s" podCreationTimestamp="2026-01-22 14:02:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:30.527484546 +0000 UTC m=+1129.938594475" watchObservedRunningTime="2026-01-22 14:02:30.537652971 +0000 UTC m=+1129.948762900" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.551696 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" event={"ID":"1ced7731-706e-49ab-8e05-af9f7dc7465a","Type":"ContainerStarted","Data":"b2817ce04426bef01585797fb018136cc8619d5bb0b65d15bba8d2eeb6f1154f"} Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.580045 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fffc955cd-tlfq2" podStartSLOduration=4.375540776 podStartE2EDuration="6.58002744s" podCreationTimestamp="2026-01-22 14:02:24 +0000 UTC" firstStartedPulling="2026-01-22 14:02:26.155972803 +0000 UTC m=+1125.567082732" lastFinishedPulling="2026-01-22 14:02:28.360459477 +0000 UTC m=+1127.771569396" observedRunningTime="2026-01-22 14:02:30.570178063 +0000 UTC m=+1129.981287992" watchObservedRunningTime="2026-01-22 14:02:30.58002744 +0000 UTC m=+1129.991137369" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.582603 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-79fdf5695-77th5" 
podStartSLOduration=4.54352623 podStartE2EDuration="6.58259646s" podCreationTimestamp="2026-01-22 14:02:24 +0000 UTC" firstStartedPulling="2026-01-22 14:02:26.341701768 +0000 UTC m=+1125.752811697" lastFinishedPulling="2026-01-22 14:02:28.380771978 +0000 UTC m=+1127.791881927" observedRunningTime="2026-01-22 14:02:30.551309792 +0000 UTC m=+1129.962419731" watchObservedRunningTime="2026-01-22 14:02:30.58259646 +0000 UTC m=+1129.993706389" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.638116 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.638172 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.639109 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.639179 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.639334 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") pod \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\" (UID: \"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1\") " Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.645334 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.646453 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs" (OuterVolumeSpecName: "logs") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.651106 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts" (OuterVolumeSpecName: "kube-api-access-q7lts") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "kube-api-access-q7lts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.667946 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data" (OuterVolumeSpecName: "config-data") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.672619 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts" (OuterVolumeSpecName: "scripts") pod "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" (UID: "c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742812 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742859 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742873 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7lts\" (UniqueName: \"kubernetes.io/projected/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-kube-api-access-q7lts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742889 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:30 crc kubenswrapper[4769]: I0122 14:02:30.742906 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.353510 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453595 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453635 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453737 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453757 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453926 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453948 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.453972 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") pod \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\" (UID: \"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5\") " Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.454320 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.455361 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs" (OuterVolumeSpecName: "logs") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.461477 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts" (OuterVolumeSpecName: "scripts") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.461854 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6" (OuterVolumeSpecName: "kube-api-access-xnwg6") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "kube-api-access-xnwg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.463074 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.496860 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.533301 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data" (OuterVolumeSpecName: "config-data") pod "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" (UID: "e5e24dd8-a4f7-4190-a34a-e1d3e92589e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556872 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556900 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556909 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556920 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556928 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556936 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnwg6\" (UniqueName: \"kubernetes.io/projected/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-kube-api-access-xnwg6\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.556947 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.586057 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-88b8d5fbf-mdp8d" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.586692 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-88b8d5fbf-mdp8d" event={"ID":"c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1","Type":"ContainerDied","Data":"054e89b41fe504baa24efa6fdc5ef87502ed22b3b42e8052873a0df4c426e7ed"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.586724 4769 scope.go:117] "RemoveContainer" containerID="75092d5e878bea8006c178193d6c6e4dcc97bd9265416f68b45c587a530c6f17" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598849 4769 generic.go:334] "Generic (PLEG): container finished" podID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" exitCode=0 Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598880 4769 generic.go:334] "Generic (PLEG): container finished" podID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" exitCode=143 Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598915 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerDied","Data":"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598942 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerDied","Data":"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.598952 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e5e24dd8-a4f7-4190-a34a-e1d3e92589e5","Type":"ContainerDied","Data":"959f5ec3a165a64e510bc22f94aef93dcf00ba618851c77ce98857a8cd8feb32"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.599004 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.612586 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerStarted","Data":"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.630652 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223"} Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.645045 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.767595781 podStartE2EDuration="6.645026268s" podCreationTimestamp="2026-01-22 14:02:25 +0000 UTC" firstStartedPulling="2026-01-22 14:02:26.482906717 +0000 UTC m=+1125.894016646" lastFinishedPulling="2026-01-22 14:02:28.360337204 +0000 UTC m=+1127.771447133" observedRunningTime="2026-01-22 14:02:31.640189967 +0000 UTC m=+1131.051299906" watchObservedRunningTime="2026-01-22 14:02:31.645026268 +0000 UTC m=+1131.056136197" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.710184 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.725154 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-88b8d5fbf-mdp8d"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.737575 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.745878 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763234 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763619 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763631 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763654 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763660 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763673 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api-log" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763680 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api-log" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763700 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon-log" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763705 4769 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon-log" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.763717 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="626171a3-dca4-4c26-9879-4127f41d2543" containerName="init" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763722 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="626171a3-dca4-4c26-9879-4127f41d2543" containerName="init" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763926 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763952 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" containerName="horizon-log" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763967 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api-log" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.763980 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="626171a3-dca4-4c26-9879-4127f41d2543" containerName="init" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.764001 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" containerName="cinder-api" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.765028 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.767890 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.769236 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.770271 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.828293 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-scripts\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862705 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvzq\" (UniqueName: \"kubernetes.io/projected/f66670ed-ef72-4a45-be6e-add4b5f52f94-kube-api-access-ggvzq\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862759 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66670ed-ef72-4a45-be6e-add4b5f52f94-logs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862785 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862827 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862861 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862893 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862910 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.862991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66670ed-ef72-4a45-be6e-add4b5f52f94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.871582 4769 scope.go:117] "RemoveContainer" containerID="24eeffe407e1855bc1e9fc29cbf3704d433191018da0d18584697247b2cdeb5c" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.889660 4769 scope.go:117] "RemoveContainer" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.921323 4769 scope.go:117] "RemoveContainer" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964510 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964565 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964593 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964676 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66670ed-ef72-4a45-be6e-add4b5f52f94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-scripts\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.964980 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvzq\" (UniqueName: \"kubernetes.io/projected/f66670ed-ef72-4a45-be6e-add4b5f52f94-kube-api-access-ggvzq\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965041 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66670ed-ef72-4a45-be6e-add4b5f52f94-logs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965066 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965091 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965689 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f66670ed-ef72-4a45-be6e-add4b5f52f94-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.965976 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f66670ed-ef72-4a45-be6e-add4b5f52f94-logs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.971476 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-scripts\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.971846 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data-custom\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.972280 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.972339 4769 scope.go:117] "RemoveContainer" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.973970 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": container with ID starting with a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659 not found: ID does not exist" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974004 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659"} err="failed to get container status \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": rpc error: code = NotFound desc = could not find container \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": container with ID starting with a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659 not found: ID does not exist" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974042 4769 scope.go:117] "RemoveContainer" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" Jan 22 14:02:31 crc kubenswrapper[4769]: E0122 14:02:31.974443 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": container with ID starting with 7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4 not found: ID does not exist" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974460 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4"} err="failed to get container status \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": rpc error: code = NotFound desc = could not find container \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": container with ID starting with 7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4 not found: ID does not exist" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974483 4769 scope.go:117] "RemoveContainer" containerID="a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974746 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659"} err="failed to get container status \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": rpc error: code = NotFound desc 
= could not find container \"a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659\": container with ID starting with a205fe5461bfbab00c4675fbef39da8e3cdeb4e605ab0a552ade19769edae659 not found: ID does not exist" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.974765 4769 scope.go:117] "RemoveContainer" containerID="7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.975049 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4"} err="failed to get container status \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": rpc error: code = NotFound desc = could not find container \"7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4\": container with ID starting with 7f106bbb2fd7a91e316f4c3bb7dc08232b3017eae43b85947c47afffb53aa3b4 not found: ID does not exist" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.983603 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-config-data\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.986114 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:31 crc kubenswrapper[4769]: I0122 14:02:31.987831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvzq\" (UniqueName: \"kubernetes.io/projected/f66670ed-ef72-4a45-be6e-add4b5f52f94-kube-api-access-ggvzq\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:31.990536 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f66670ed-ef72-4a45-be6e-add4b5f52f94-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f66670ed-ef72-4a45-be6e-add4b5f52f94\") " pod="openstack/cinder-api-0" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.097297 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.104734 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5765d95c66-48prv"] Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.106219 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.108183 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.113958 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.138126 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5765d95c66-48prv"] Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.273782 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-combined-ca-bundle\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274565 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274640 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-internal-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274752 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data-custom\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274871 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a5cf33-efc2-4ca4-93cf-c397436588cb-logs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274925 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-public-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.274968 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b64t\" (UniqueName: \"kubernetes.io/projected/95a5cf33-efc2-4ca4-93cf-c397436588cb-kube-api-access-8b64t\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377487 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data-custom\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377627 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a5cf33-efc2-4ca4-93cf-c397436588cb-logs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-public-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377749 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b64t\" (UniqueName: \"kubernetes.io/projected/95a5cf33-efc2-4ca4-93cf-c397436588cb-kube-api-access-8b64t\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.377915 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-combined-ca-bundle\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.378056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.378148 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-internal-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.380692 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a5cf33-efc2-4ca4-93cf-c397436588cb-logs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.393903 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-public-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.394338 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-internal-tls-certs\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.394836 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-combined-ca-bundle\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.395713 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data-custom\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.401080 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b64t\" (UniqueName: \"kubernetes.io/projected/95a5cf33-efc2-4ca4-93cf-c397436588cb-kube-api-access-8b64t\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.414027 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a5cf33-efc2-4ca4-93cf-c397436588cb-config-data\") pod \"barbican-api-5765d95c66-48prv\" (UID: \"95a5cf33-efc2-4ca4-93cf-c397436588cb\") " pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.498309 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.593028 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 22 14:02:32 crc kubenswrapper[4769]: W0122 14:02:32.595845 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf66670ed_ef72_4a45_be6e_add4b5f52f94.slice/crio-7749fde25a07524950cf875b05364639b8258e7c79918d486ae792e9819e28ea WatchSource:0}: Error finding container 7749fde25a07524950cf875b05364639b8258e7c79918d486ae792e9819e28ea: Status 404 returned error can't find the container with id 7749fde25a07524950cf875b05364639b8258e7c79918d486ae792e9819e28ea Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.682843 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c"} Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.688357 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f66670ed-ef72-4a45-be6e-add4b5f52f94","Type":"ContainerStarted","Data":"7749fde25a07524950cf875b05364639b8258e7c79918d486ae792e9819e28ea"} Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.717655 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.894926 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1" path="/var/lib/kubelet/pods/c8f3fb6e-9a7c-4d5e-9cc4-b5f39361c4b1/volumes" Jan 22 14:02:32 crc kubenswrapper[4769]: I0122 14:02:32.896002 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5e24dd8-a4f7-4190-a34a-e1d3e92589e5" path="/var/lib/kubelet/pods/e5e24dd8-a4f7-4190-a34a-e1d3e92589e5/volumes" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.045576 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.054549 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5765d95c66-48prv"] Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.712939 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerStarted","Data":"0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.713344 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.727190 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f66670ed-ef72-4a45-be6e-add4b5f52f94","Type":"ContainerStarted","Data":"a39e387f5cb6796bd5245099577a041d4535330335336f500889fa062380c528"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.743633 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.007732393 podStartE2EDuration="6.743608251s" podCreationTimestamp="2026-01-22 14:02:27 +0000 UTC" firstStartedPulling="2026-01-22 14:02:29.420452808 +0000 UTC m=+1128.831562727" lastFinishedPulling="2026-01-22 14:02:33.156328656 
+0000 UTC m=+1132.567438585" observedRunningTime="2026-01-22 14:02:33.735533561 +0000 UTC m=+1133.146643500" watchObservedRunningTime="2026-01-22 14:02:33.743608251 +0000 UTC m=+1133.154718180" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746404 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5765d95c66-48prv" event={"ID":"95a5cf33-efc2-4ca4-93cf-c397436588cb","Type":"ContainerStarted","Data":"2754cbe24003d84d5d8ab18a809cd82431ec14af97d42ed25eaba73bf5c21e5d"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746465 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5765d95c66-48prv" event={"ID":"95a5cf33-efc2-4ca4-93cf-c397436588cb","Type":"ContainerStarted","Data":"0ab896500ec150c8bf3bca58b8d802dfbbe37af095176eabc94f4c8827641c93"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746479 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5765d95c66-48prv" event={"ID":"95a5cf33-efc2-4ca4-93cf-c397436588cb","Type":"ContainerStarted","Data":"d272230a7a62bb7d6abfae5a7ba1a9c5070f1e0d62a268cba218e9edc3a00fb2"} Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746865 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.746890 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:33 crc kubenswrapper[4769]: I0122 14:02:33.784515 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5765d95c66-48prv" podStartSLOduration=1.7844932390000001 podStartE2EDuration="1.784493239s" podCreationTimestamp="2026-01-22 14:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:33.778872087 +0000 UTC m=+1133.189982026" watchObservedRunningTime="2026-01-22 14:02:33.784493239 +0000 UTC m=+1133.195603198" Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.640434 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7cc4c8d8bd-69kmb" Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.744910 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.747898 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.758078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f66670ed-ef72-4a45-be6e-add4b5f52f94","Type":"ContainerStarted","Data":"087ace9ba575af2007578e297c79bfb3494a65af68fe7fbcd4c9a7bfe7e38a7a"} Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.758115 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.758221 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon-log" containerID="cri-o://b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" gracePeriod=30 Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.758911 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" containerID="cri-o://dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" gracePeriod=30 Jan 22 14:02:34 crc kubenswrapper[4769]: I0122 14:02:34.811641 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.811616 podStartE2EDuration="3.811616s" podCreationTimestamp="2026-01-22 14:02:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:34.799341187 +0000 UTC m=+1134.210451116" watchObservedRunningTime="2026-01-22 14:02:34.811616 +0000 UTC m=+1134.222725929" Jan 22 14:02:35 crc kubenswrapper[4769]: I0122 14:02:35.767004 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.014962 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.102778 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.188683 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.188947 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="dnsmasq-dns" containerID="cri-o://83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" gracePeriod=10 Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.729130 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.776816 4769 generic.go:334] "Generic (PLEG): container finished" podID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerID="83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" exitCode=0 Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.776903 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerDied","Data":"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a"} Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.777007 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.777042 4769 scope.go:117] "RemoveContainer" containerID="83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.778662 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-8bcps" event={"ID":"09f60324-cca8-4988-bf9b-6967d2bfe9f6","Type":"ContainerDied","Data":"de08ee3bddd1437f1405dc62dcd35ee86837e2196876742c81be83ac8aaa6642"} Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.816541 4769 scope.go:117] "RemoveContainer" containerID="5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.819135 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.851135 4769 scope.go:117] "RemoveContainer" containerID="83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" Jan 22 14:02:36 crc kubenswrapper[4769]: E0122 14:02:36.851605 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a\": container with ID starting with 83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a not found: ID does not exist" containerID="83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.851647 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a"} err="failed to get container status \"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a\": rpc error: code = NotFound desc = could not find container \"83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a\": container with ID starting with 83d081c8a21e75cf1863029740b353ffa7a1f8816c42743784431ac4247f119a not found: ID does not exist" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.851674 4769 scope.go:117] "RemoveContainer" containerID="5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d" Jan 22 14:02:36 crc kubenswrapper[4769]: E0122 14:02:36.852013 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d\": container with ID starting with 5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d not found: ID does not exist" containerID="5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.852055 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d"} err="failed to get container status \"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d\": rpc error: code = NotFound desc = could not find container \"5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d\": container with ID starting with 5cdf9c7a0103441af1fab3d20ca2ba561f800dd384d01d55e05efe9b94bef65d not found: ID does not exist" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.877551 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9lf9\" (UniqueName: 
\"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.877633 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.877733 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.878630 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.878658 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.878786 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") pod \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\" (UID: \"09f60324-cca8-4988-bf9b-6967d2bfe9f6\") " Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.890618 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9" (OuterVolumeSpecName: "kube-api-access-w9lf9") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "kube-api-access-w9lf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.932039 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config" (OuterVolumeSpecName: "config") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.935315 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.937713 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.952567 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.955742 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "09f60324-cca8-4988-bf9b-6967d2bfe9f6" (UID: "09f60324-cca8-4988-bf9b-6967d2bfe9f6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981011 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981044 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9lf9\" (UniqueName: \"kubernetes.io/projected/09f60324-cca8-4988-bf9b-6967d2bfe9f6-kube-api-access-w9lf9\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981056 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981105 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981118 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:36 crc kubenswrapper[4769]: I0122 14:02:36.981125 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09f60324-cca8-4988-bf9b-6967d2bfe9f6-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.071801 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.134862 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.139675 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-8bcps"] Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 
14:02:37.293320 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.788390 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="cinder-scheduler" containerID="cri-o://3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" gracePeriod=30 Jan 22 14:02:37 crc kubenswrapper[4769]: I0122 14:02:37.789017 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="probe" containerID="cri-o://b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" gracePeriod=30 Jan 22 14:02:38 crc kubenswrapper[4769]: I0122 14:02:38.810905 4769 generic.go:334] "Generic (PLEG): container finished" podID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerID="dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" exitCode=0 Jan 22 14:02:38 crc kubenswrapper[4769]: I0122 14:02:38.811011 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerDied","Data":"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a"} Jan 22 14:02:38 crc kubenswrapper[4769]: I0122 14:02:38.855742 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-d8d684bc6-pmxwh" Jan 22 14:02:38 crc kubenswrapper[4769]: I0122 14:02:38.895837 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" path="/var/lib/kubelet/pods/09f60324-cca8-4988-bf9b-6967d2bfe9f6/volumes" Jan 22 14:02:39 crc kubenswrapper[4769]: I0122 14:02:39.843591 4769 generic.go:334] "Generic (PLEG): container finished" podID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerID="b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" exitCode=0 Jan 22 14:02:39 crc kubenswrapper[4769]: I0122 14:02:39.843816 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerDied","Data":"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74"} Jan 22 14:02:40 crc kubenswrapper[4769]: I0122 14:02:40.460784 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:02:40 crc kubenswrapper[4769]: I0122 14:02:40.481541 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:02:40 crc kubenswrapper[4769]: I0122 14:02:40.481607 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:02:41 crc 
kubenswrapper[4769]: I0122 14:02:41.171743 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 22 14:02:41 crc kubenswrapper[4769]: E0122 14:02:41.172413 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="dnsmasq-dns" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.172427 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="dnsmasq-dns" Jan 22 14:02:41 crc kubenswrapper[4769]: E0122 14:02:41.172449 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="init" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.172455 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="init" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.172625 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f60324-cca8-4988-bf9b-6967d2bfe9f6" containerName="dnsmasq-dns" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.173226 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.179993 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.180105 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mtjrf" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.180326 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.190463 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.286253 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config-secret\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.286353 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.286409 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.286435 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg4tn\" (UniqueName: \"kubernetes.io/projected/a46459a9-7fab-439c-95fe-5d6cdcb16997-kube-api-access-kg4tn\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 
14:02:41.388166 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config-secret\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.388260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.388316 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.388341 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg4tn\" (UniqueName: \"kubernetes.io/projected/a46459a9-7fab-439c-95fe-5d6cdcb16997-kube-api-access-kg4tn\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.389552 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.394488 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.395169 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a46459a9-7fab-439c-95fe-5d6cdcb16997-openstack-config-secret\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.410397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg4tn\" (UniqueName: \"kubernetes.io/projected/a46459a9-7fab-439c-95fe-5d6cdcb16997-kube-api-access-kg4tn\") pod \"openstackclient\" (UID: \"a46459a9-7fab-439c-95fe-5d6cdcb16997\") " pod="openstack/openstackclient" Jan 22 14:02:41 crc kubenswrapper[4769]: I0122 14:02:41.495779 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.116236 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 22 14:02:42 crc kubenswrapper[4769]: W0122 14:02:42.120340 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda46459a9_7fab_439c_95fe_5d6cdcb16997.slice/crio-a356f107447d745c0f7b15bc272f1ebc4dde1957fc214b6a088fa2276d888939 WatchSource:0}: Error finding container a356f107447d745c0f7b15bc272f1ebc4dde1957fc214b6a088fa2276d888939: Status 404 returned error can't find the container with id a356f107447d745c0f7b15bc272f1ebc4dde1957fc214b6a088fa2276d888939 Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.321675 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423357 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423419 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423486 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423547 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423571 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423630 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.423670 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") pod \"4383579e-af20-4ae8-89f7-bdaf6480881a\" (UID: \"4383579e-af20-4ae8-89f7-bdaf6480881a\") " Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.424025 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4383579e-af20-4ae8-89f7-bdaf6480881a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.431966 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts" (OuterVolumeSpecName: "scripts") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.434100 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.437621 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx" (OuterVolumeSpecName: "kube-api-access-462mx") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "kube-api-access-462mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.516095 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.526011 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.526040 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-462mx\" (UniqueName: \"kubernetes.io/projected/4383579e-af20-4ae8-89f7-bdaf6480881a-kube-api-access-462mx\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.526050 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.526061 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.563920 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data" (OuterVolumeSpecName: "config-data") pod "4383579e-af20-4ae8-89f7-bdaf6480881a" (UID: "4383579e-af20-4ae8-89f7-bdaf6480881a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.628129 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4383579e-af20-4ae8-89f7-bdaf6480881a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935023 4769 generic.go:334] "Generic (PLEG): container finished" podID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerID="3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" exitCode=0 Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935110 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerDied","Data":"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32"} Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4383579e-af20-4ae8-89f7-bdaf6480881a","Type":"ContainerDied","Data":"f9d86078c4b4a242efcd83eab3552c5360368cb84cb5844f47a02e8a76d0befc"} Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935141 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.935494 4769 scope.go:117] "RemoveContainer" containerID="b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.941609 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a46459a9-7fab-439c-95fe-5d6cdcb16997","Type":"ContainerStarted","Data":"a356f107447d745c0f7b15bc272f1ebc4dde1957fc214b6a088fa2276d888939"} Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.968589 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.977280 4769 scope.go:117] "RemoveContainer" containerID="3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" Jan 22 14:02:42 crc kubenswrapper[4769]: I0122 14:02:42.987930 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.006864 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:43 crc kubenswrapper[4769]: E0122 14:02:43.007312 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="probe" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.007329 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="probe" Jan 22 14:02:43 crc kubenswrapper[4769]: E0122 14:02:43.007346 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="cinder-scheduler" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.007356 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="cinder-scheduler" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.007554 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="cinder-scheduler" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.007580 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" containerName="probe" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.008533 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.016268 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.024034 4769 scope.go:117] "RemoveContainer" containerID="b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.024399 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 22 14:02:43 crc kubenswrapper[4769]: E0122 14:02:43.028264 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74\": container with ID starting with b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74 not found: ID does not exist" containerID="b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.028318 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74"} err="failed to get container status \"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74\": rpc error: code = NotFound desc = could not find container \"b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74\": container with ID starting with b3dca9a61e5a77a6229ecdcd9e48901971abfbd1767813a6cd35dba0f4aaac74 not found: ID does not exist" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.028352 4769 scope.go:117] "RemoveContainer" containerID="3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" Jan 22 14:02:43 crc kubenswrapper[4769]: E0122 14:02:43.032351 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32\": container with ID starting with 3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32 not found: ID does not exist" containerID="3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.032388 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32"} err="failed to get container status \"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32\": rpc error: code = NotFound desc = could not find container \"3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32\": container with ID starting with 3354e5732aafcb263d3676fad9ee3df3cbabafc6bd7029cbe04efa83053a2c32 not found: ID does not exist" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142203 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142358 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data-custom\") pod 
\"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142478 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142527 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142568 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.142613 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9kd\" (UniqueName: \"kubernetes.io/projected/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-kube-api-access-zm9kd\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244358 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244433 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244477 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244518 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244573 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9kd\" (UniqueName: \"kubernetes.io/projected/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-kube-api-access-zm9kd\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244754 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.244962 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.249345 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.256612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.256996 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-scripts\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.257433 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.288279 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9kd\" (UniqueName: \"kubernetes.io/projected/4552f275-d56c-4f3d-a8fd-7e5c4e2da02e-kube-api-access-zm9kd\") pod \"cinder-scheduler-0\" (UID: \"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e\") " pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.340771 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.906213 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 22 14:02:43 crc kubenswrapper[4769]: W0122 14:02:43.909370 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4552f275_d56c_4f3d_a8fd_7e5c4e2da02e.slice/crio-a6147300d28d15144c2a6cbdc364d1b893206a6a3a264aa8705d121faf758802 WatchSource:0}: Error finding container a6147300d28d15144c2a6cbdc364d1b893206a6a3a264aa8705d121faf758802: Status 404 returned error can't find the container with id a6147300d28d15144c2a6cbdc364d1b893206a6a3a264aa8705d121faf758802 Jan 22 14:02:43 crc kubenswrapper[4769]: I0122 14:02:43.952224 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e","Type":"ContainerStarted","Data":"a6147300d28d15144c2a6cbdc364d1b893206a6a3a264aa8705d121faf758802"} Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.561281 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.600070 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.734663 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5765d95c66-48prv" Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.816303 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.816773 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" containerID="cri-o://04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" gracePeriod=30 Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.817426 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" containerID="cri-o://d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" gracePeriod=30 Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.921285 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4383579e-af20-4ae8-89f7-bdaf6480881a" path="/var/lib/kubelet/pods/4383579e-af20-4ae8-89f7-bdaf6480881a/volumes" Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.996307 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e","Type":"ContainerStarted","Data":"504d239ec4be194cb42134b743bbbccfa90e53d23b1ba970b9ad6cf450ba4478"} Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.998314 4769 generic.go:334] "Generic (PLEG): container finished" podID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerID="04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" exitCode=143 Jan 22 14:02:44 crc kubenswrapper[4769]: I0122 14:02:44.999201 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" 
event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerDied","Data":"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908"} Jan 22 14:02:45 crc kubenswrapper[4769]: I0122 14:02:45.563095 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:46 crc kubenswrapper[4769]: I0122 14:02:46.010350 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4552f275-d56c-4f3d-a8fd-7e5c4e2da02e","Type":"ContainerStarted","Data":"86ade0dfcf9afcd576932a25c11fa146cc4582a1aad43558d46829daa678ba95"} Jan 22 14:02:46 crc kubenswrapper[4769]: I0122 14:02:46.039354 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.039331659 podStartE2EDuration="4.039331659s" podCreationTimestamp="2026-01-22 14:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:46.029809311 +0000 UTC m=+1145.440919240" watchObservedRunningTime="2026-01-22 14:02:46.039331659 +0000 UTC m=+1145.450441588" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.657275 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-576cb8587-7cl26"] Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.659573 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.666703 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.666730 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.666730 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.670519 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-576cb8587-7cl26"] Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.700643 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d6bcd56b9-2hx4m" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.764007 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.764258 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7ffdb95bfd-x5vfj" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-api" containerID="cri-o://c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79" gracePeriod=30 Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.764734 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7ffdb95bfd-x5vfj" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-httpd" containerID="cri-o://1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0" gracePeriod=30 Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848704 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6tps\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-kube-api-access-n6tps\") pod 
\"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848813 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-etc-swift\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848867 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-internal-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848896 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-config-data\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.848950 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-combined-ca-bundle\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.849061 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-run-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.849131 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-log-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.849184 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-public-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950269 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-etc-swift\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950327 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-internal-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950351 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-config-data\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950381 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-combined-ca-bundle\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950430 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-run-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950477 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-log-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-public-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.950540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6tps\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-kube-api-access-n6tps\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.952247 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-run-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.954308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75afafe2-c784-45fa-8104-1115c8921138-log-httpd\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.956445 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-combined-ca-bundle\") pod \"swift-proxy-576cb8587-7cl26\" 
(UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.957117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-config-data\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.960534 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-public-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.960625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75afafe2-c784-45fa-8104-1115c8921138-internal-tls-certs\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.960963 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-etc-swift\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.974703 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6tps\" (UniqueName: \"kubernetes.io/projected/75afafe2-c784-45fa-8104-1115c8921138-kube-api-access-n6tps\") pod \"swift-proxy-576cb8587-7cl26\" (UID: \"75afafe2-c784-45fa-8104-1115c8921138\") " pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:47 crc kubenswrapper[4769]: I0122 14:02:47.983186 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.018483 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:41048->10.217.0.161:9311: read: connection reset by peer" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.018518 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:41064->10.217.0.161:9311: read: connection reset by peer" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.050281 4769 generic.go:334] "Generic (PLEG): container finished" podID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerID="1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0" exitCode=0 Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.050324 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerDied","Data":"1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0"} Jan 22 14:02:48 crc kubenswrapper[4769]: E0122 14:02:48.190338 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaad9379_b67a_4b3a_8cc9_f37d9ad425e8.slice/crio-d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20.scope\": RecentStats: unable to find data in memory cache]" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.341868 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.507397 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665306 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665377 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br56m\" (UniqueName: \"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665433 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665465 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.665599 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") pod \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\" (UID: \"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8\") " Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.670990 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs" (OuterVolumeSpecName: "logs") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.674036 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m" (OuterVolumeSpecName: "kube-api-access-br56m") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "kube-api-access-br56m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.676489 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.696438 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.733520 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data" (OuterVolumeSpecName: "config-data") pod "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" (UID: "eaad9379-b67a-4b3a-8cc9-f37d9ad425e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.739539 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-576cb8587-7cl26"] Jan 22 14:02:48 crc kubenswrapper[4769]: W0122 14:02:48.746029 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75afafe2_c784_45fa_8104_1115c8921138.slice/crio-e856f9a5d1e8ef92da77622b170cc2bf367179d2476b441d0d4e4cc36d12e8b2 WatchSource:0}: Error finding container e856f9a5d1e8ef92da77622b170cc2bf367179d2476b441d0d4e4cc36d12e8b2: Status 404 returned error can't find the container with id e856f9a5d1e8ef92da77622b170cc2bf367179d2476b441d0d4e4cc36d12e8b2 Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767636 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767670 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br56m\" (UniqueName: \"kubernetes.io/projected/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-kube-api-access-br56m\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767683 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767692 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:48 crc kubenswrapper[4769]: I0122 14:02:48.767701 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.067847 4769 generic.go:334] "Generic (PLEG): container finished" podID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerID="d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" exitCode=0 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.068016 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.068458 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerDied","Data":"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20"} Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.068517 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6bc9c49fb8-n7dm2" event={"ID":"eaad9379-b67a-4b3a-8cc9-f37d9ad425e8","Type":"ContainerDied","Data":"9af8e79839bd151effc1aa29a1d456de2993b92396c6ddf4772fc15ecf95323b"} Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.068537 4769 scope.go:117] "RemoveContainer" containerID="d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.075772 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-576cb8587-7cl26" event={"ID":"75afafe2-c784-45fa-8104-1115c8921138","Type":"ContainerStarted","Data":"e856f9a5d1e8ef92da77622b170cc2bf367179d2476b441d0d4e4cc36d12e8b2"} Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.098367 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.107664 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6bc9c49fb8-n7dm2"] Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.846615 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.848165 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-central-agent" containerID="cri-o://3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31" gracePeriod=30 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.848202 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="sg-core" containerID="cri-o://b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c" gracePeriod=30 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.848196 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="proxy-httpd" containerID="cri-o://0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3" gracePeriod=30 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.848300 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-notification-agent" containerID="cri-o://81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223" gracePeriod=30 Jan 22 14:02:49 crc kubenswrapper[4769]: I0122 14:02:49.863429 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.090097 4769 generic.go:334] "Generic (PLEG): container finished" podID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerID="0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3" exitCode=0 Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.090475 
4769 generic.go:334] "Generic (PLEG): container finished" podID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerID="b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c" exitCode=2 Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.090240 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3"} Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.090527 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c"} Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.460671 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:02:50 crc kubenswrapper[4769]: I0122 14:02:50.896158 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" path="/var/lib/kubelet/pods/eaad9379-b67a-4b3a-8cc9-f37d9ad425e8/volumes" Jan 22 14:02:51 crc kubenswrapper[4769]: I0122 14:02:51.115509 4769 generic.go:334] "Generic (PLEG): container finished" podID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerID="3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31" exitCode=0 Jan 22 14:02:51 crc kubenswrapper[4769]: I0122 14:02:51.115557 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31"} Jan 22 14:02:52 crc kubenswrapper[4769]: I0122 14:02:52.530561 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:52 crc kubenswrapper[4769]: I0122 14:02:52.531043 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-log" containerID="cri-o://df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee" gracePeriod=30 Jan 22 14:02:52 crc kubenswrapper[4769]: I0122 14:02:52.531112 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-httpd" containerID="cri-o://42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f" gracePeriod=30 Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.137674 4769 generic.go:334] "Generic (PLEG): container finished" podID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerID="df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee" exitCode=143 Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.137745 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerDied","Data":"df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee"} Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.145091 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223"} Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.145160 4769 generic.go:334] "Generic (PLEG): container finished" podID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerID="81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223" exitCode=0 Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.148223 4769 generic.go:334] "Generic (PLEG): container finished" podID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerID="c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79" exitCode=0 Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.148258 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerDied","Data":"c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79"} Jan 22 14:02:53 crc kubenswrapper[4769]: I0122 14:02:53.545965 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.344412 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tx7mp"] Jan 22 14:02:54 crc kubenswrapper[4769]: E0122 14:02:54.344894 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.344918 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" Jan 22 14:02:54 crc kubenswrapper[4769]: E0122 14:02:54.344934 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.344942 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.345153 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api-log" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.345174 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaad9379-b67a-4b3a-8cc9-f37d9ad425e8" containerName="barbican-api" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.345890 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.394508 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tx7mp"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.443693 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5t26t"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.447118 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.469639 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5t26t"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.490875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.491266 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9z8l\" (UniqueName: \"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.554241 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.555598 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.555717 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.559279 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.593144 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.593924 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.594102 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.594148 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.594293 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9z8l\" (UniqueName: 
\"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.624283 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9z8l\" (UniqueName: \"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") pod \"nova-api-db-create-tx7mp\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.663129 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fllmn"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.665899 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.672872 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fllmn"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.689272 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.691386 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.695490 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.695559 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.695683 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.695721 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.696619 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.697925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.723113 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") pod \"nova-cell0-db-create-5t26t\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.732406 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.733107 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.767587 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.797727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.797902 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.797969 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.797997 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.798088 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.798447 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") pod 
\"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.798749 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.833165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") pod \"nova-api-264d-account-create-update-4z8cb\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.871212 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.872538 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.874818 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.878955 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.901420 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.901528 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.901665 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.901747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.902191 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.903407 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.911755 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.918648 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.918947 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-log" containerID="cri-o://938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40" gracePeriod=30 Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.919461 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-httpd" containerID="cri-o://a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07" gracePeriod=30 Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.927925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") pod \"nova-cell1-db-create-fllmn\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.950928 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") pod \"nova-cell0-49d8-account-create-update-gnbhc\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:54 crc kubenswrapper[4769]: I0122 14:02:54.995031 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.003382 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.003497 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.017806 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.104659 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.104787 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.105520 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.121634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") pod \"nova-cell1-ddb8-account-create-update-zm48k\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.171124 4769 generic.go:334] "Generic (PLEG): container finished" podID="49bcd071-b172-4180-996d-a8494ce80ab7" containerID="938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40" exitCode=143 Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.171161 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerDied","Data":"938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40"} Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.204811 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.486461 4769 scope.go:117] "RemoveContainer" containerID="04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.622381 4769 scope.go:117] "RemoveContainer" containerID="d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" Jan 22 14:02:55 crc kubenswrapper[4769]: E0122 14:02:55.622989 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20\": container with ID starting with d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20 not found: ID does not exist" containerID="d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.623024 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20"} err="failed to get container status \"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20\": rpc error: code = NotFound desc = could not find container \"d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20\": container with ID starting with d6a865911489b9a1028413866f392612dc71ad5cc1fae59e38104d4f68999e20 not found: ID does not exist" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.623051 4769 scope.go:117] "RemoveContainer" containerID="04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" Jan 22 14:02:55 crc kubenswrapper[4769]: E0122 14:02:55.623557 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908\": container with ID starting with 04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908 not found: ID does not exist" containerID="04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.623593 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908"} err="failed to get container status \"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908\": rpc error: code = NotFound desc = could not find container \"04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908\": container with ID starting with 04c4a8706de1fbb034493ccbd107bf586baaf531c480261c94f054acfee6f908 not found: ID does not exist" Jan 22 14:02:55 crc kubenswrapper[4769]: I0122 14:02:55.872937 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fllmn"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.103829 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.197032 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fllmn" event={"ID":"ecb8a996-384c-4155-b45d-6a6335165545","Type":"ContainerStarted","Data":"33d960cc92853c91418decd1c1e81af16c036144d8e551ab31b77730864076c3"} Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.204600 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-576cb8587-7cl26" 
event={"ID":"75afafe2-c784-45fa-8104-1115c8921138","Type":"ContainerStarted","Data":"c1234bb42d52ecd3fa353dab10a5ae2fa88e278117102689bcadb087bebbc3a7"} Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.205842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-264d-account-create-update-4z8cb" event={"ID":"fe68065a-9702-4440-a09a-2698d21ad5cc","Type":"ContainerStarted","Data":"fb03596a8742e0abb8ca676e233fe992f1bbc203ca0cae509c668afd4e7766aa"} Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.209265 4769 generic.go:334] "Generic (PLEG): container finished" podID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerID="42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f" exitCode=0 Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.209294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerDied","Data":"42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f"} Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.470580 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.566984 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.575062 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5t26t"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.590149 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tx7mp"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.609783 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.639580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.639744 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.639816 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.640480 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.640541 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") pod \"0783e518-6a8e-43a3-9b33-4d0710f958f6\" (UID: \"0783e518-6a8e-43a3-9b33-4d0710f958f6\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.644043 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"] Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.672031 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z" (OuterVolumeSpecName: "kube-api-access-jtk6z") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "kube-api-access-jtk6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.680680 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742077 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742143 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742165 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742243 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742289 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742400 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6zn7\" (UniqueName: 
\"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742727 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.742739 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtk6z\" (UniqueName: \"kubernetes.io/projected/0783e518-6a8e-43a3-9b33-4d0710f958f6-kube-api-access-jtk6z\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.750124 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.751902 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.778094 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7" (OuterVolumeSpecName: "kube-api-access-l6zn7") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "kube-api-access-l6zn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.781763 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts" (OuterVolumeSpecName: "scripts") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.843332 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.843609 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") pod \"e12c3fd8-b199-4dbb-8022-ea1997362b45\" (UID: \"e12c3fd8-b199-4dbb-8022-ea1997362b45\") " Jan 22 14:02:56 crc kubenswrapper[4769]: W0122 14:02:56.843879 4769 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e12c3fd8-b199-4dbb-8022-ea1997362b45/volumes/kubernetes.io~secret/sg-core-conf-yaml Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.843985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844645 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844665 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844675 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844685 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e12c3fd8-b199-4dbb-8022-ea1997362b45-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844693 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6zn7\" (UniqueName: \"kubernetes.io/projected/e12c3fd8-b199-4dbb-8022-ea1997362b45-kube-api-access-l6zn7\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.844985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config" (OuterVolumeSpecName: "config") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.857057 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.872749 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0783e518-6a8e-43a3-9b33-4d0710f958f6" (UID: "0783e518-6a8e-43a3-9b33-4d0710f958f6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.910191 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data" (OuterVolumeSpecName: "config-data") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.943538 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e12c3fd8-b199-4dbb-8022-ea1997362b45" (UID: "e12c3fd8-b199-4dbb-8022-ea1997362b45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948930 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948965 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948978 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948987 4769 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0783e518-6a8e-43a3-9b33-4d0710f958f6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:56 crc kubenswrapper[4769]: I0122 14:02:56.948995 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e12c3fd8-b199-4dbb-8022-ea1997362b45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.098904 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.226426 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7ffdb95bfd-x5vfj" event={"ID":"0783e518-6a8e-43a3-9b33-4d0710f958f6","Type":"ContainerDied","Data":"7728df5824bdc02cf7f433c8c65dbea0209e0b45bf371c7fd3ff2a02c06db9ef"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.226439 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7ffdb95bfd-x5vfj" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.226480 4769 scope.go:117] "RemoveContainer" containerID="1a3f324f9c10250340c90b3fa9891a5895621c3821c7c74ce5c3074476e207b0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.229750 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tx7mp" event={"ID":"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce","Type":"ContainerStarted","Data":"9b721a5f2a54f7e10b9d6313d093c22bf6e06ca26d653a2b9eddb1cde91b429e"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.244639 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-576cb8587-7cl26" event={"ID":"75afafe2-c784-45fa-8104-1115c8921138","Type":"ContainerStarted","Data":"07a0d1ba9cf45b0092b37dd1c4795758a1430bdbc3cc5c2cd6708ce728099eba"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.245018 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.245039 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256056 4769 generic.go:334] "Generic (PLEG): container finished" podID="fe68065a-9702-4440-a09a-2698d21ad5cc" containerID="751475c8a4f373e18f772a466e3903901a4fe7bb3bad0aaf09ffde9f52db0d97" exitCode=0 Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256158 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-264d-account-create-update-4z8cb" event={"ID":"fe68065a-9702-4440-a09a-2698d21ad5cc","Type":"ContainerDied","Data":"751475c8a4f373e18f772a466e3903901a4fe7bb3bad0aaf09ffde9f52db0d97"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256392 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256495 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256545 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256582 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256603 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 
14:02:57.256665 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.256837 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") pod \"dab0b9a4-13fb-42b5-be06-1231f96c4016\" (UID: \"dab0b9a4-13fb-42b5-be06-1231f96c4016\") " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.257250 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.257279 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs" (OuterVolumeSpecName: "logs") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.257626 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.257650 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dab0b9a4-13fb-42b5-be06-1231f96c4016-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.282941 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dab0b9a4-13fb-42b5-be06-1231f96c4016","Type":"ContainerDied","Data":"d5ec275ecffbb843da730d80b73f7a952b5598fd63f1b7fb5564a3c77534d9ce"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.283029 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.289705 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr" (OuterVolumeSpecName: "kube-api-access-c2ptr") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "kube-api-access-c2ptr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.291815 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-576cb8587-7cl26" podStartSLOduration=10.291775929 podStartE2EDuration="10.291775929s" podCreationTimestamp="2026-01-22 14:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:02:57.277967684 +0000 UTC m=+1156.689077623" watchObservedRunningTime="2026-01-22 14:02:57.291775929 +0000 UTC m=+1156.702885858" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.298384 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts" (OuterVolumeSpecName: "scripts") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.302642 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.303002 4769 generic.go:334] "Generic (PLEG): container finished" podID="ecb8a996-384c-4155-b45d-6a6335165545" containerID="be7b8f38b3fcc55abca045ec63342b69733efd9d1dc30413ccf64f860152d0b1" exitCode=0 Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.303094 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fllmn" event={"ID":"ecb8a996-384c-4155-b45d-6a6335165545","Type":"ContainerDied","Data":"be7b8f38b3fcc55abca045ec63342b69733efd9d1dc30413ccf64f860152d0b1"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.305811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" event={"ID":"b33b7a35-52b8-47c6-b5a7-5cf87d838927","Type":"ContainerStarted","Data":"3bde0705d34c87d4eabfe7fb123b426bb1c060e1a93c38781b2d5073620c51be"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.314616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" event={"ID":"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d","Type":"ContainerStarted","Data":"08b0b5abfe60f5c3c4d81e0794fb73d02949bc2843159af9976a8ea288ce36e5"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.319268 4769 scope.go:117] "RemoveContainer" containerID="c85ede29f7444218742a32b8c6ee6ce640aed0f91c712213650abe7455210e79" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.335609 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.337130 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5t26t" event={"ID":"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17","Type":"ContainerStarted","Data":"c1e8dfd11532902b9aba6d45844dcf3a73a1816450e5c693654fc410ab3cb953"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.349818 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7ffdb95bfd-x5vfj"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 
14:02:57.351303 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e12c3fd8-b199-4dbb-8022-ea1997362b45","Type":"ContainerDied","Data":"6847e6f717a917e8f33fe5f7732739ecc0907695151d12527fc0722d9980fff4"} Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.351453 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.361290 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.361332 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.361344 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2ptr\" (UniqueName: \"kubernetes.io/projected/dab0b9a4-13fb-42b5-be06-1231f96c4016-kube-api-access-c2ptr\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.370538 4769 scope.go:117] "RemoveContainer" containerID="42b650e1bb6392891cc6da4a8a010ef12200563d87973891cd250c5a4e408d2f" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.423659 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.431568 4769 scope.go:117] "RemoveContainer" containerID="df6c3aec1d93c8e3b135e0f0f09265bd6003dda7e97e74ba5f9864130b43bcee" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.431701 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.440091 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.440604 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-api" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.441415 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-api" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.441530 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-log" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.441635 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-log" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.441711 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.441773 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.441869 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.441942 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" 
containerName="neutron-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.442015 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="proxy-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.442106 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="proxy-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.442218 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-notification-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.442692 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-notification-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.442834 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="sg-core" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.443022 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="sg-core" Jan 22 14:02:57 crc kubenswrapper[4769]: E0122 14:02:57.443122 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-central-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.443203 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-central-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.443532 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="proxy-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.443691 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-notification-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444270 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="ceilometer-central-agent" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444407 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-api" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444522 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" containerName="sg-core" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444627 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-log" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444743 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" containerName="glance-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.444865 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" containerName="neutron-httpd" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.449482 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.451985 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.454122 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.455902 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.476650 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.484755 4769 scope.go:117] "RemoveContainer" containerID="0bf74afc1bd09f3d8c6303b0e19d9074d9577290bb273a6f32a45d4dcae632a3" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.518040 4769 scope.go:117] "RemoveContainer" containerID="b5ee5434348cf923fba435a2559a5a264053474440f8130af21c2d5bd4b2a22c" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.518783 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.552217 4769 scope.go:117] "RemoveContainer" containerID="81b00fa0cdcc67e791a9afbc3e7519246869d2324e6cda565a71161bcb2fc223" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.563343 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data" (OuterVolumeSpecName: "config-data") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565013 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dab0b9a4-13fb-42b5-be06-1231f96c4016" (UID: "dab0b9a4-13fb-42b5-be06-1231f96c4016"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565612 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565663 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565690 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565714 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565732 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565780 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565841 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565928 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565942 4769 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565956 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.565968 4769 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab0b9a4-13fb-42b5-be06-1231f96c4016-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.611251 4769 scope.go:117] "RemoveContainer" containerID="3f046e94cf581905bfb412cafcc0aba6ed78f4b25c54f79b4edd2b0575beed31" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.670327 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671298 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671335 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671346 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671362 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671613 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671667 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671784 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.671880 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.675635 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.677619 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.678094 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.688163 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.695391 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.697497 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") pod \"ceilometer-0\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.738724 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.740460 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.743187 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.743359 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.773280 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.780128 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.874642 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.874693 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.874723 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.874805 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.875048 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.875069 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-logs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.875087 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.875135 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ffjw\" (UniqueName: \"kubernetes.io/projected/6e1405ea-42cd-4345-b44a-8e72350a3357-kube-api-access-9ffjw\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977412 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9ffjw\" (UniqueName: \"kubernetes.io/projected/6e1405ea-42cd-4345-b44a-8e72350a3357-kube-api-access-9ffjw\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977775 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977819 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977871 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.977991 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.978077 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.978122 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-logs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.978146 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.980844 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.981309 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6e1405ea-42cd-4345-b44a-8e72350a3357-logs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.981938 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.984802 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.985355 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.987137 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:57 crc kubenswrapper[4769]: I0122 14:02:57.990587 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1405ea-42cd-4345-b44a-8e72350a3357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.003953 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ffjw\" (UniqueName: \"kubernetes.io/projected/6e1405ea-42cd-4345-b44a-8e72350a3357-kube-api-access-9ffjw\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.019658 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"6e1405ea-42cd-4345-b44a-8e72350a3357\") " pod="openstack/glance-default-external-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.095900 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.253358 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:58 crc kubenswrapper[4769]: W0122 14:02:58.290247 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11b92673_89ea_4ef5_87f5_743e06fcb861.slice/crio-90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c WatchSource:0}: Error finding container 90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c: Status 404 returned error can't find the container with id 90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.376391 4769 generic.go:334] "Generic (PLEG): container finished" podID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" containerID="afb16cda8136e3c60a4cc4eee0a34fec39387efd7fcb1e371afcd2d6220a3675" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.376842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tx7mp" event={"ID":"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce","Type":"ContainerDied","Data":"afb16cda8136e3c60a4cc4eee0a34fec39387efd7fcb1e371afcd2d6220a3675"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.381127 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a46459a9-7fab-439c-95fe-5d6cdcb16997","Type":"ContainerStarted","Data":"041dbb0cf121e394f1c409f34093072bd77aeb78a757dac85ac4af70442e6978"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.392532 4769 generic.go:334] "Generic (PLEG): container finished" podID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" containerID="35419b0caadf70dae858a9997b2843ac8c049f423da3e9c017409f33d3f2290e" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.392611 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" event={"ID":"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d","Type":"ContainerDied","Data":"35419b0caadf70dae858a9997b2843ac8c049f423da3e9c017409f33d3f2290e"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.417891 4769 generic.go:334] "Generic (PLEG): container finished" podID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" containerID="5bf2e7be98fe42d0c15cb0b41bd3e6c08f22798c04acc10db52946a1a04187f4" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.418117 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5t26t" event={"ID":"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17","Type":"ContainerDied","Data":"5bf2e7be98fe42d0c15cb0b41bd3e6c08f22798c04acc10db52946a1a04187f4"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.423378 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.445472 4769 generic.go:334] "Generic (PLEG): container finished" podID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" containerID="98cf78384a8d16885b92b730a74a3979d2ab97411451096f63dae1f0143aa7f4" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.445529 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" 
event={"ID":"b33b7a35-52b8-47c6-b5a7-5cf87d838927","Type":"ContainerDied","Data":"98cf78384a8d16885b92b730a74a3979d2ab97411451096f63dae1f0143aa7f4"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.448322 4769 generic.go:334] "Generic (PLEG): container finished" podID="49bcd071-b172-4180-996d-a8494ce80ab7" containerID="a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07" exitCode=0 Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.449087 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerDied","Data":"a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07"} Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.451604 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.272708817 podStartE2EDuration="17.451593168s" podCreationTimestamp="2026-01-22 14:02:41 +0000 UTC" firstStartedPulling="2026-01-22 14:02:42.122811962 +0000 UTC m=+1141.533921891" lastFinishedPulling="2026-01-22 14:02:56.301696313 +0000 UTC m=+1155.712806242" observedRunningTime="2026-01-22 14:02:58.438181264 +0000 UTC m=+1157.849291193" watchObservedRunningTime="2026-01-22 14:02:58.451593168 +0000 UTC m=+1157.862703097" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.620901 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.898639 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0783e518-6a8e-43a3-9b33-4d0710f958f6" path="/var/lib/kubelet/pods/0783e518-6a8e-43a3-9b33-4d0710f958f6/volumes" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.900223 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab0b9a4-13fb-42b5-be06-1231f96c4016" path="/var/lib/kubelet/pods/dab0b9a4-13fb-42b5-be06-1231f96c4016/volumes" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.901944 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e12c3fd8-b199-4dbb-8022-ea1997362b45" path="/var/lib/kubelet/pods/e12c3fd8-b199-4dbb-8022-ea1997362b45/volumes" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.979984 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.991021 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:58 crc kubenswrapper[4769]: I0122 14:02:58.999577 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.022905 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.106626 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") pod \"ecb8a996-384c-4155-b45d-6a6335165545\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107003 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk722\" (UniqueName: \"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107037 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107107 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107137 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107208 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") pod \"fe68065a-9702-4440-a09a-2698d21ad5cc\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107265 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107323 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") pod \"fe68065a-9702-4440-a09a-2698d21ad5cc\" (UID: \"fe68065a-9702-4440-a09a-2698d21ad5cc\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.107898 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.108260 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe68065a-9702-4440-a09a-2698d21ad5cc" (UID: "fe68065a-9702-4440-a09a-2698d21ad5cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.110274 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs" (OuterVolumeSpecName: "logs") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.114973 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115033 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115057 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115067 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd" (OuterVolumeSpecName: "kube-api-access-8rwcd") pod "ecb8a996-384c-4155-b45d-6a6335165545" (UID: "ecb8a996-384c-4155-b45d-6a6335165545"). InnerVolumeSpecName "kube-api-access-8rwcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115102 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") pod \"49bcd071-b172-4180-996d-a8494ce80ab7\" (UID: \"49bcd071-b172-4180-996d-a8494ce80ab7\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115130 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") pod \"ecb8a996-384c-4155-b45d-6a6335165545\" (UID: \"ecb8a996-384c-4155-b45d-6a6335165545\") " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115853 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rwcd\" (UniqueName: \"kubernetes.io/projected/ecb8a996-384c-4155-b45d-6a6335165545-kube-api-access-8rwcd\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115881 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115891 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115899 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe68065a-9702-4440-a09a-2698d21ad5cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.115908 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bcd071-b172-4180-996d-a8494ce80ab7-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.117226 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt" (OuterVolumeSpecName: "kube-api-access-4h7rt") pod "fe68065a-9702-4440-a09a-2698d21ad5cc" (UID: "fe68065a-9702-4440-a09a-2698d21ad5cc"). InnerVolumeSpecName "kube-api-access-4h7rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.118066 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ecb8a996-384c-4155-b45d-6a6335165545" (UID: "ecb8a996-384c-4155-b45d-6a6335165545"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.120229 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts" (OuterVolumeSpecName: "scripts") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.120277 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722" (OuterVolumeSpecName: "kube-api-access-tk722") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "kube-api-access-tk722". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.143189 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.162304 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.196706 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data" (OuterVolumeSpecName: "config-data") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218102 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218143 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ecb8a996-384c-4155-b45d-6a6335165545-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218156 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk722\" (UniqueName: \"kubernetes.io/projected/49bcd071-b172-4180-996d-a8494ce80ab7-kube-api-access-tk722\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218174 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218188 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218196 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h7rt\" (UniqueName: \"kubernetes.io/projected/fe68065a-9702-4440-a09a-2698d21ad5cc-kube-api-access-4h7rt\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.218204 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.223959 4769 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "49bcd071-b172-4180-996d-a8494ce80ab7" (UID: "49bcd071-b172-4180-996d-a8494ce80ab7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.320056 4769 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49bcd071-b172-4180-996d-a8494ce80ab7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.460047 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e1405ea-42cd-4345-b44a-8e72350a3357","Type":"ContainerStarted","Data":"8ad6347010d6112ee922996a6b2ff35db5d866513c76ddfc4c83fac04ed5249f"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.462943 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-264d-account-create-update-4z8cb" event={"ID":"fe68065a-9702-4440-a09a-2698d21ad5cc","Type":"ContainerDied","Data":"fb03596a8742e0abb8ca676e233fe992f1bbc203ca0cae509c668afd4e7766aa"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.462985 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb03596a8742e0abb8ca676e233fe992f1bbc203ca0cae509c668afd4e7766aa" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.462992 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-264d-account-create-update-4z8cb" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.464839 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.468528 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fllmn" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.468816 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fllmn" event={"ID":"ecb8a996-384c-4155-b45d-6a6335165545","Type":"ContainerDied","Data":"33d960cc92853c91418decd1c1e81af16c036144d8e551ab31b77730864076c3"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.468860 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d960cc92853c91418decd1c1e81af16c036144d8e551ab31b77730864076c3" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.473871 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"49bcd071-b172-4180-996d-a8494ce80ab7","Type":"ContainerDied","Data":"c4bd6d4a50528753ee39f385b25433a38f084b70a487761e402319d168c73922"} Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.473916 4769 scope.go:117] "RemoveContainer" containerID="a2d9a00afd560361b63a4a984016f967c6c70fe342eda3b82ceb9f885d271c07" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.473970 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.539142 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.553807 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.555979 4769 scope.go:117] "RemoveContainer" containerID="938d482072f52ec70bd25d780639f9001b17b5d4e8cfed165c79e03594adbc40" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.563952 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: E0122 14:02:59.564650 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-httpd" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.564733 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-httpd" Jan 22 14:02:59 crc kubenswrapper[4769]: E0122 14:02:59.564821 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecb8a996-384c-4155-b45d-6a6335165545" containerName="mariadb-database-create" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.564905 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecb8a996-384c-4155-b45d-6a6335165545" containerName="mariadb-database-create" Jan 22 14:02:59 crc kubenswrapper[4769]: E0122 14:02:59.564967 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-log" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565022 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-log" Jan 22 14:02:59 crc kubenswrapper[4769]: E0122 14:02:59.565094 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe68065a-9702-4440-a09a-2698d21ad5cc" containerName="mariadb-account-create-update" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565146 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe68065a-9702-4440-a09a-2698d21ad5cc" containerName="mariadb-account-create-update" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565374 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe68065a-9702-4440-a09a-2698d21ad5cc" containerName="mariadb-account-create-update" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565488 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-httpd" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565574 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecb8a996-384c-4155-b45d-6a6335165545" containerName="mariadb-database-create" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.565652 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" containerName="glance-log" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.567693 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.570911 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.571169 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.590379 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.731497 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.731883 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.731954 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.731981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.732043 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.732079 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdr7\" (UniqueName: \"kubernetes.io/projected/adf621f0-a198-4042-93a3-791ed71e1ee3-kube-api-access-fvdr7\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.732230 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.732349 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.862868 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.862919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.862963 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.862990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdr7\" (UniqueName: \"kubernetes.io/projected/adf621f0-a198-4042-93a3-791ed71e1ee3-kube-api-access-fvdr7\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863043 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863083 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863119 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863576 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.863955 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.865077 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/adf621f0-a198-4042-93a3-791ed71e1ee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.868582 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.868596 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.871572 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.873411 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf621f0-a198-4042-93a3-791ed71e1ee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.893119 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdr7\" (UniqueName: \"kubernetes.io/projected/adf621f0-a198-4042-93a3-791ed71e1ee3-kube-api-access-fvdr7\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:02:59 crc kubenswrapper[4769]: I0122 14:02:59.937211 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"adf621f0-a198-4042-93a3-791ed71e1ee3\") " pod="openstack/glance-default-internal-api-0" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.023476 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.078248 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.100297 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.121753 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.169484 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9z8l\" (UniqueName: \"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") pod \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.169699 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") pod \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\" (UID: \"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.169744 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") pod \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.169761 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") pod \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\" (UID: \"b33b7a35-52b8-47c6-b5a7-5cf87d838927\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.170777 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b33b7a35-52b8-47c6-b5a7-5cf87d838927" (UID: "b33b7a35-52b8-47c6-b5a7-5cf87d838927"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.170879 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" (UID: "288566dc-b78e-46e4-9bd3-c61bc9c2a6ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.174324 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l" (OuterVolumeSpecName: "kube-api-access-g9z8l") pod "288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" (UID: "288566dc-b78e-46e4-9bd3-c61bc9c2a6ce"). InnerVolumeSpecName "kube-api-access-g9z8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.174869 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9" (OuterVolumeSpecName: "kube-api-access-gw7l9") pod "b33b7a35-52b8-47c6-b5a7-5cf87d838927" (UID: "b33b7a35-52b8-47c6-b5a7-5cf87d838927"). InnerVolumeSpecName "kube-api-access-gw7l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.200076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.271656 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") pod \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.271884 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") pod \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\" (UID: \"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.271910 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") pod \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272015 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") pod \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\" (UID: \"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17\") " Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272483 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272503 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw7l9\" (UniqueName: \"kubernetes.io/projected/b33b7a35-52b8-47c6-b5a7-5cf87d838927-kube-api-access-gw7l9\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272515 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33b7a35-52b8-47c6-b5a7-5cf87d838927-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.272525 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9z8l\" (UniqueName: \"kubernetes.io/projected/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce-kube-api-access-g9z8l\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.273323 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" (UID: "cdcc2db5-9739-4e49-a6cc-3f7aff70f97d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.273574 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" (UID: "e45f7c9a-23a2-40fe-80dc-305f1fbc8e17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.275527 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9" (OuterVolumeSpecName: "kube-api-access-mk2z9") pod "e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" (UID: "e45f7c9a-23a2-40fe-80dc-305f1fbc8e17"). InnerVolumeSpecName "kube-api-access-mk2z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.276482 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8" (OuterVolumeSpecName: "kube-api-access-2p7k8") pod "cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" (UID: "cdcc2db5-9739-4e49-a6cc-3f7aff70f97d"). InnerVolumeSpecName "kube-api-access-2p7k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.374673 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk2z9\" (UniqueName: \"kubernetes.io/projected/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-kube-api-access-mk2z9\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.375043 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p7k8\" (UniqueName: \"kubernetes.io/projected/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-kube-api-access-2p7k8\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.375054 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.375065 4769 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.461567 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6464b9bcc6-tjgjv" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.461726 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.515509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} Jan 22 14:03:00 crc 
kubenswrapper[4769]: I0122 14:03:00.520556 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.520567 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ddb8-account-create-update-zm48k" event={"ID":"cdcc2db5-9739-4e49-a6cc-3f7aff70f97d","Type":"ContainerDied","Data":"08b0b5abfe60f5c3c4d81e0794fb73d02949bc2843159af9976a8ea288ce36e5"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.520602 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b0b5abfe60f5c3c4d81e0794fb73d02949bc2843159af9976a8ea288ce36e5" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.526752 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5t26t" event={"ID":"e45f7c9a-23a2-40fe-80dc-305f1fbc8e17","Type":"ContainerDied","Data":"c1e8dfd11532902b9aba6d45844dcf3a73a1816450e5c693654fc410ab3cb953"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.526754 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5t26t" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.526800 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e8dfd11532902b9aba6d45844dcf3a73a1816450e5c693654fc410ab3cb953" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.531689 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tx7mp" event={"ID":"288566dc-b78e-46e4-9bd3-c61bc9c2a6ce","Type":"ContainerDied","Data":"9b721a5f2a54f7e10b9d6313d093c22bf6e06ca26d653a2b9eddb1cde91b429e"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.531729 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b721a5f2a54f7e10b9d6313d093c22bf6e06ca26d653a2b9eddb1cde91b429e" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.531894 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tx7mp" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.537269 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" event={"ID":"b33b7a35-52b8-47c6-b5a7-5cf87d838927","Type":"ContainerDied","Data":"3bde0705d34c87d4eabfe7fb123b426bb1c060e1a93c38781b2d5073620c51be"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.537306 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bde0705d34c87d4eabfe7fb123b426bb1c060e1a93c38781b2d5073620c51be" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.537359 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-49d8-account-create-update-gnbhc" Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.544563 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e1405ea-42cd-4345-b44a-8e72350a3357","Type":"ContainerStarted","Data":"4e7a8c300758f336f2c192ba31db93d9dc1a12401810a6e6dcd30912c6c08140"} Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.778705 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 22 14:03:00 crc kubenswrapper[4769]: I0122 14:03:00.939941 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bcd071-b172-4180-996d-a8494ce80ab7" path="/var/lib/kubelet/pods/49bcd071-b172-4180-996d-a8494ce80ab7/volumes" Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.563639 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e1405ea-42cd-4345-b44a-8e72350a3357","Type":"ContainerStarted","Data":"4837b05c7f14955b3fadbc1a6bb3a6669b78714341955303f28396fc19c04de6"} Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.576363 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adf621f0-a198-4042-93a3-791ed71e1ee3","Type":"ContainerStarted","Data":"c1b94a0de5367741301d88181f6a32ce4effeeab43e55cc22517bb07d983c82c"} Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.576407 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adf621f0-a198-4042-93a3-791ed71e1ee3","Type":"ContainerStarted","Data":"1ad7ae6160aaee4cfd37c4d02c6de3469c26afd562fb4491a8ca33ec92fca600"} Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.589432 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} Jan 22 14:03:01 crc kubenswrapper[4769]: I0122 14:03:01.597659 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.597637672 podStartE2EDuration="4.597637672s" podCreationTimestamp="2026-01-22 14:02:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:01.589909443 +0000 UTC m=+1161.001019372" watchObservedRunningTime="2026-01-22 14:03:01.597637672 +0000 UTC m=+1161.008747601" Jan 22 14:03:02 crc kubenswrapper[4769]: I0122 14:03:02.604829 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"adf621f0-a198-4042-93a3-791ed71e1ee3","Type":"ContainerStarted","Data":"6f57ef2050aada2007225172f0c8fe10cb1bf865b0bf6cc5ac57c3ae05313025"} Jan 22 14:03:02 crc kubenswrapper[4769]: I0122 14:03:02.637440 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.637424286 podStartE2EDuration="3.637424286s" podCreationTimestamp="2026-01-22 14:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:02.628879274 +0000 UTC m=+1162.039989213" watchObservedRunningTime="2026-01-22 14:03:02.637424286 +0000 UTC m=+1162.048534215" Jan 22 14:03:02 crc 
kubenswrapper[4769]: I0122 14:03:02.993101 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.000588 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-576cb8587-7cl26" Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.617938 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerStarted","Data":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618048 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-central-agent" containerID="cri-o://cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" gracePeriod=30 Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618086 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="sg-core" containerID="cri-o://8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" gracePeriod=30 Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618457 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618182 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-notification-agent" containerID="cri-o://d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" gracePeriod=30 Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.618103 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="proxy-httpd" containerID="cri-o://043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" gracePeriod=30 Jan 22 14:03:03 crc kubenswrapper[4769]: I0122 14:03:03.642018 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.479111228 podStartE2EDuration="6.641996485s" podCreationTimestamp="2026-01-22 14:02:57 +0000 UTC" firstStartedPulling="2026-01-22 14:02:58.31081818 +0000 UTC m=+1157.721928119" lastFinishedPulling="2026-01-22 14:03:02.473703447 +0000 UTC m=+1161.884813376" observedRunningTime="2026-01-22 14:03:03.638338756 +0000 UTC m=+1163.049448685" watchObservedRunningTime="2026-01-22 14:03:03.641996485 +0000 UTC m=+1163.053106414" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.324040 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472360 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472425 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472443 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472473 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472505 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472553 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.472620 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") pod \"11b92673-89ea-4ef5-87f5-743e06fcb861\" (UID: \"11b92673-89ea-4ef5-87f5-743e06fcb861\") " Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.473216 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.473574 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.474065 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.478924 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts" (OuterVolumeSpecName: "scripts") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.479254 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k" (OuterVolumeSpecName: "kube-api-access-5wn6k") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "kube-api-access-5wn6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.503924 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.548918 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575337 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575380 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wn6k\" (UniqueName: \"kubernetes.io/projected/11b92673-89ea-4ef5-87f5-743e06fcb861-kube-api-access-5wn6k\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575394 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575411 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11b92673-89ea-4ef5-87f5-743e06fcb861-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.575420 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.579079 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data" (OuterVolumeSpecName: "config-data") pod "11b92673-89ea-4ef5-87f5-743e06fcb861" (UID: "11b92673-89ea-4ef5-87f5-743e06fcb861"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627815 4769 generic.go:334] "Generic (PLEG): container finished" podID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" exitCode=0 Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627852 4769 generic.go:334] "Generic (PLEG): container finished" podID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" exitCode=2 Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627866 4769 generic.go:334] "Generic (PLEG): container finished" podID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" exitCode=0 Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627875 4769 generic.go:334] "Generic (PLEG): container finished" podID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" exitCode=0 Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627896 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627908 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627936 4769 scope.go:117] "RemoveContainer" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.627925 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.628099 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.628111 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.628122 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11b92673-89ea-4ef5-87f5-743e06fcb861","Type":"ContainerDied","Data":"90073d3abb4df7c2f402c287b27f42fcd53565c4e6e648db72612d0dd2e0511c"} Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.647722 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.676948 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11b92673-89ea-4ef5-87f5-743e06fcb861-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.678005 4769 scope.go:117] "RemoveContainer" 
containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.682032 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.691831 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709246 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709573 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709587 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709599 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709606 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709619 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-central-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709627 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-central-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709642 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="proxy-httpd" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709648 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="proxy-httpd" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709700 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-notification-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709706 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-notification-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709718 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709726 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709737 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709743 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.709753 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="sg-core" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709758 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="sg-core" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709964 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709976 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709991 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="proxy-httpd" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.709999 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-notification-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.710009 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" containerName="mariadb-database-create" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.710017 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="sg-core" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.710287 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" containerName="ceilometer-central-agent" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.710305 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" containerName="mariadb-account-create-update" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.711898 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.711989 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.762893 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.762997 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.781943 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.882639 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883071 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883107 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883200 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883224 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883270 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.883320 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.898493 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11b92673-89ea-4ef5-87f5-743e06fcb861" path="/var/lib/kubelet/pods/11b92673-89ea-4ef5-87f5-743e06fcb861/volumes" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.961493 4769 scope.go:117] "RemoveContainer" 
containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.962032 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962109 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} err="failed to get container status \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962134 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.962397 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962423 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} err="failed to get container status \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962439 4769 scope.go:117] "RemoveContainer" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.962716 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962737 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} err="failed to get container status \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with 
d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.962750 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: E0122 14:03:04.963071 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963104 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} err="failed to get container status \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963131 4769 scope.go:117] "RemoveContainer" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963361 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} err="failed to get container status \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963390 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963636 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} err="failed to get container status \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.963660 4769 scope.go:117] "RemoveContainer" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964278 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} err="failed to get container status \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with 
d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964301 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964551 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} err="failed to get container status \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964575 4769 scope.go:117] "RemoveContainer" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964741 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} err="failed to get container status \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964762 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964951 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} err="failed to get container status \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.964973 4769 scope.go:117] "RemoveContainer" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965224 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} err="failed to get container status \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965242 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965484 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} err="failed to get container status \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965551 4769 scope.go:117] "RemoveContainer" containerID="043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965943 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038"} err="failed to get container status \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": rpc error: code = NotFound desc = could not find container \"043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038\": container with ID starting with 043ee8a04c5433a5a41fc8257ea6c5b0090b5249d686422b5b4e2620f92f0038 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.965967 4769 scope.go:117] "RemoveContainer" containerID="8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966209 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2"} err="failed to get container status \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": rpc error: code = NotFound desc = could not find container \"8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2\": container with ID starting with 8e7e24d69f5473c0a9b871e786b088228a456a7e3d57efd297e1d5f2dce38de2 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966231 4769 scope.go:117] "RemoveContainer" containerID="d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966553 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578"} err="failed to get container status \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": rpc error: code = NotFound desc = could not find container \"d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578\": container with ID starting with d08590a7a0c967986a2faec48ac3132b3f1e6ae845f20966fe975a096a748578 not found: ID does not exist" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966579 4769 scope.go:117] "RemoveContainer" containerID="cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.966836 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64"} err="failed to get container status \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": rpc error: code = NotFound desc = could not find container \"cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64\": container with ID starting with cc12fb32b2246b53fd5e4f662494ad847d5f628bbbf6ac0a4c83be5adb1c2b64 not found: ID does not exist" Jan 
22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.985887 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.985947 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.985974 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.986100 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.986150 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.986190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.986242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.988967 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:04 crc kubenswrapper[4769]: I0122 14:03:04.989782 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:04.995718 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 
14:03:04.996351 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.001415 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.013587 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.031228 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.105120 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.110831 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.114812 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.115094 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.115270 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hh9r6" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.129041 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.142757 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.189173 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.189312 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.189414 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.189454 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.248925 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291348 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291723 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291770 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291828 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.291855 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292016 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292050 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") pod \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\" (UID: \"aa581bf8-802c-4c64-80fe-83a1baf50a6e\") " Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292353 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292560 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.292607 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.295466 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs" (OuterVolumeSpecName: "logs") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.297844 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.299094 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.299376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.301542 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px" (OuterVolumeSpecName: "kube-api-access-pv2px") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "kube-api-access-pv2px". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.304074 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.311454 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") pod \"nova-cell0-conductor-db-sync-hql94\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.321243 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts" (OuterVolumeSpecName: "scripts") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.325454 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data" (OuterVolumeSpecName: "config-data") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.326552 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.358614 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "aa581bf8-802c-4c64-80fe-83a1baf50a6e" (UID: "aa581bf8-802c-4c64-80fe-83a1baf50a6e"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394302 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394340 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aa581bf8-802c-4c64-80fe-83a1baf50a6e-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394350 4769 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394359 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394367 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa581bf8-802c-4c64-80fe-83a1baf50a6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394376 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv2px\" (UniqueName: \"kubernetes.io/projected/aa581bf8-802c-4c64-80fe-83a1baf50a6e-kube-api-access-pv2px\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.394386 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/aa581bf8-802c-4c64-80fe-83a1baf50a6e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.456715 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653093 4769 generic.go:334] "Generic (PLEG): container finished" podID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerID="b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" exitCode=137 Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653405 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerDied","Data":"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79"} Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6464b9bcc6-tjgjv" event={"ID":"aa581bf8-802c-4c64-80fe-83a1baf50a6e","Type":"ContainerDied","Data":"a21b69f798a23fdcfdfb92adcc62b30839c1be6a1c5c04d00a869ead5ddc22a7"} Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653473 4769 scope.go:117] "RemoveContainer" containerID="dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.653494 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6464b9bcc6-tjgjv" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.694598 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.704242 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.716477 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6464b9bcc6-tjgjv"] Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.838115 4769 scope.go:117] "RemoveContainer" containerID="b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" Jan 22 14:03:05 crc kubenswrapper[4769]: W0122 14:03:05.853118 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a08cae6_6172_4bb5_9145_4bd967ff8652.slice/crio-7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3 WatchSource:0}: Error finding container 7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3: Status 404 returned error can't find the container with id 7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3 Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.908994 4769 scope.go:117] "RemoveContainer" containerID="dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" Jan 22 14:03:05 crc kubenswrapper[4769]: E0122 14:03:05.910472 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a\": container with ID starting with dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a not found: ID does not exist" containerID="dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.910511 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a"} err="failed to get container status \"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a\": rpc error: code = NotFound desc = could not find container \"dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a\": container with ID starting with dc2e4c5fd0438679984690345cbc0e4820ff234a30678389437d5d203ba8a03a not found: ID does not exist" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.910530 4769 scope.go:117] "RemoveContainer" containerID="b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" Jan 22 14:03:05 crc kubenswrapper[4769]: E0122 14:03:05.910903 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79\": container with ID starting with b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79 not found: ID does not exist" containerID="b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.910930 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79"} err="failed to get container status \"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79\": rpc error: code = NotFound desc = could not find container 
\"b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79\": container with ID starting with b1c17d223ae3c6e1952926e3cf792e852ecbb7c481e6bf6d9e1501d916e79b79 not found: ID does not exist" Jan 22 14:03:05 crc kubenswrapper[4769]: I0122 14:03:05.957219 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"] Jan 22 14:03:06 crc kubenswrapper[4769]: I0122 14:03:06.664649 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hql94" event={"ID":"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf","Type":"ContainerStarted","Data":"1de90ac29d18bc8134c5a8f9409cf4f6984104454efcb5cd68aa76ba8988c519"} Jan 22 14:03:06 crc kubenswrapper[4769]: I0122 14:03:06.665941 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803"} Jan 22 14:03:06 crc kubenswrapper[4769]: I0122 14:03:06.665963 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3"} Jan 22 14:03:06 crc kubenswrapper[4769]: I0122 14:03:06.898237 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" path="/var/lib/kubelet/pods/aa581bf8-802c-4c64-80fe-83a1baf50a6e/volumes" Jan 22 14:03:07 crc kubenswrapper[4769]: I0122 14:03:07.687664 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817"} Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.097184 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.097268 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.145715 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.145840 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.705351 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e"} Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.705398 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:03:08 crc kubenswrapper[4769]: I0122 14:03:08.705563 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 22 14:03:09 crc kubenswrapper[4769]: I0122 14:03:09.738686 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerStarted","Data":"146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17"} Jan 22 14:03:09 crc 
kubenswrapper[4769]: I0122 14:03:09.739105 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:03:09 crc kubenswrapper[4769]: I0122 14:03:09.765953 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.645435591 podStartE2EDuration="5.765935761s" podCreationTimestamp="2026-01-22 14:03:04 +0000 UTC" firstStartedPulling="2026-01-22 14:03:05.856152982 +0000 UTC m=+1165.267262911" lastFinishedPulling="2026-01-22 14:03:08.976653152 +0000 UTC m=+1168.387763081" observedRunningTime="2026-01-22 14:03:09.761673396 +0000 UTC m=+1169.172783325" watchObservedRunningTime="2026-01-22 14:03:09.765935761 +0000 UTC m=+1169.177045690" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.201098 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.201160 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.246287 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.275236 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.481607 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.481705 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.481780 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.482652 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.482721 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa" gracePeriod=600 Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.759723 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa" exitCode=0 Jan 22 14:03:10 crc 
kubenswrapper[4769]: I0122 14:03:10.759882 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa"} Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.759949 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.759964 4769 scope.go:117] "RemoveContainer" containerID="ee8cd9f7d29583d39d5d09ca76eab4931e04c9d5e08aa5de68839001387a3d8e" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.759968 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.760777 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.760887 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.917439 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 14:03:10 crc kubenswrapper[4769]: I0122 14:03:10.923746 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.089653 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.769066 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-central-agent" containerID="cri-o://d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803" gracePeriod=30 Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.769114 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="sg-core" containerID="cri-o://0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e" gracePeriod=30 Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.769141 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="proxy-httpd" containerID="cri-o://146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17" gracePeriod=30 Jan 22 14:03:11 crc kubenswrapper[4769]: I0122 14:03:11.769177 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-notification-agent" containerID="cri-o://cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817" gracePeriod=30 Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.803692 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerID="146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17" exitCode=0 Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.803736 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerID="0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e" 
exitCode=2 Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.803746 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerID="cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817" exitCode=0 Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.804600 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17"} Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.804632 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e"} Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.804644 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817"} Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.985304 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:12 crc kubenswrapper[4769]: I0122 14:03:12.985745 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 14:03:13 crc kubenswrapper[4769]: I0122 14:03:13.102505 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 22 14:03:15 crc kubenswrapper[4769]: I0122 14:03:15.831148 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hql94" event={"ID":"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf","Type":"ContainerStarted","Data":"18279fc40052f609766481b086ba6db177d4033484da61ddaf6b1e3ccb376090"} Jan 22 14:03:15 crc kubenswrapper[4769]: I0122 14:03:15.835179 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f"} Jan 22 14:03:15 crc kubenswrapper[4769]: I0122 14:03:15.856585 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-hql94" podStartSLOduration=1.277467406 podStartE2EDuration="10.856563317s" podCreationTimestamp="2026-01-22 14:03:05 +0000 UTC" firstStartedPulling="2026-01-22 14:03:05.968539749 +0000 UTC m=+1165.379649678" lastFinishedPulling="2026-01-22 14:03:15.54763566 +0000 UTC m=+1174.958745589" observedRunningTime="2026-01-22 14:03:15.849532756 +0000 UTC m=+1175.260642695" watchObservedRunningTime="2026-01-22 14:03:15.856563317 +0000 UTC m=+1175.267673246" Jan 22 14:03:16 crc kubenswrapper[4769]: I0122 14:03:16.850972 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerID="d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803" exitCode=0 Jan 22 14:03:16 crc kubenswrapper[4769]: I0122 14:03:16.852500 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803"} Jan 22 14:03:16 crc kubenswrapper[4769]: I0122 
14:03:16.977196 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.121527 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.121604 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.121713 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.121768 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.122288 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.123234 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.123560 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.123671 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.123743 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") pod \"0a08cae6-6172-4bb5-9145-4bd967ff8652\" (UID: \"0a08cae6-6172-4bb5-9145-4bd967ff8652\") " Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.125040 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.125066 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a08cae6-6172-4bb5-9145-4bd967ff8652-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.138093 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq" (OuterVolumeSpecName: "kube-api-access-rfbwq") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "kube-api-access-rfbwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.138108 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts" (OuterVolumeSpecName: "scripts") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.164847 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.218046 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.226693 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.226736 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfbwq\" (UniqueName: \"kubernetes.io/projected/0a08cae6-6172-4bb5-9145-4bd967ff8652-kube-api-access-rfbwq\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.226750 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.226762 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.240589 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data" (OuterVolumeSpecName: "config-data") pod "0a08cae6-6172-4bb5-9145-4bd967ff8652" (UID: "0a08cae6-6172-4bb5-9145-4bd967ff8652"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.328361 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a08cae6-6172-4bb5-9145-4bd967ff8652-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.865227 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a08cae6-6172-4bb5-9145-4bd967ff8652","Type":"ContainerDied","Data":"7ffdcceaf87e641941223d39bc52c74d8b48c68c5b706146fd42462d26c3e6b3"} Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.865285 4769 scope.go:117] "RemoveContainer" containerID="146393a28b927d8945f4f1b9a4097563dd6740cfea691a9bb5aea3d8298c2c17" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.865443 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.939580 4769 scope.go:117] "RemoveContainer" containerID="0571a53b9fe417d9b61564df6d67f72bf69a843d7774f1a04bd4b9a4c1ff791e" Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.942597 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.951347 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:17 crc kubenswrapper[4769]: I0122 14:03:17.960883 4769 scope.go:117] "RemoveContainer" containerID="cf03c5b093b848a8cf13336a277b6c6c320c64735982f0770e348eccb16fc817" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.004889 4769 scope.go:117] "RemoveContainer" containerID="d12511606cbb3d139c42c9505d216c13a6b0282888ddd4e9ceca736cd31a0803" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.013698 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014167 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon-log" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014191 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon-log" Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014202 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="proxy-httpd" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014209 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="proxy-httpd" Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014228 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-central-agent" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014234 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-central-agent" Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014254 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014261 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014275 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-notification-agent" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014282 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-notification-agent" Jan 22 14:03:18 crc kubenswrapper[4769]: E0122 14:03:18.014290 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="sg-core" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014295 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="sg-core" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014449 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014460 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-notification-agent" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014476 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="ceilometer-central-agent" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014487 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="proxy-httpd" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014495 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa581bf8-802c-4c64-80fe-83a1baf50a6e" containerName="horizon-log" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.014505 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" containerName="sg-core" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.016233 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.020254 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.020538 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.039267 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143556 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143592 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143614 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143651 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") pod \"ceilometer-0\" (UID: 
\"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143677 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.143749 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245325 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245518 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245625 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245730 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.245958 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.247823 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") pod \"ceilometer-0\" 
(UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.247826 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.253378 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.253395 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.254098 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.254346 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.265831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") pod \"ceilometer-0\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.334091 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.782545 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:18 crc kubenswrapper[4769]: W0122 14:03:18.789005 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2da17df6_1c4c_453a_9943_4a44e8a14664.slice/crio-63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0 WatchSource:0}: Error finding container 63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0: Status 404 returned error can't find the container with id 63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0 Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.877367 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0"} Jan 22 14:03:18 crc kubenswrapper[4769]: I0122 14:03:18.893245 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a08cae6-6172-4bb5-9145-4bd967ff8652" path="/var/lib/kubelet/pods/0a08cae6-6172-4bb5-9145-4bd967ff8652/volumes" Jan 22 14:03:19 crc kubenswrapper[4769]: I0122 14:03:19.887947 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"} Jan 22 14:03:23 crc kubenswrapper[4769]: I0122 14:03:23.930865 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5"} Jan 22 14:03:24 crc kubenswrapper[4769]: I0122 14:03:24.941167 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947"} Jan 22 14:03:26 crc kubenswrapper[4769]: I0122 14:03:26.963169 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerStarted","Data":"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533"} Jan 22 14:03:26 crc kubenswrapper[4769]: I0122 14:03:26.965059 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:03:26 crc kubenswrapper[4769]: I0122 14:03:26.988701 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.898638312 podStartE2EDuration="9.988681102s" podCreationTimestamp="2026-01-22 14:03:17 +0000 UTC" firstStartedPulling="2026-01-22 14:03:18.791229995 +0000 UTC m=+1178.202339924" lastFinishedPulling="2026-01-22 14:03:25.881272785 +0000 UTC m=+1185.292382714" observedRunningTime="2026-01-22 14:03:26.986496673 +0000 UTC m=+1186.397606602" watchObservedRunningTime="2026-01-22 14:03:26.988681102 +0000 UTC m=+1186.399791021" Jan 22 14:03:27 crc kubenswrapper[4769]: I0122 14:03:27.972448 4769 generic.go:334] "Generic (PLEG): container finished" podID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" containerID="18279fc40052f609766481b086ba6db177d4033484da61ddaf6b1e3ccb376090" exitCode=0 Jan 22 14:03:27 crc 
kubenswrapper[4769]: I0122 14:03:27.972651 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hql94" event={"ID":"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf","Type":"ContainerDied","Data":"18279fc40052f609766481b086ba6db177d4033484da61ddaf6b1e3ccb376090"} Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.310347 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.478461 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") pod \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.478521 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") pod \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.478546 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") pod \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.478675 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") pod \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\" (UID: \"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf\") " Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.497185 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts" (OuterVolumeSpecName: "scripts") pod "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" (UID: "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.497282 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm" (OuterVolumeSpecName: "kube-api-access-rsjnm") pod "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" (UID: "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf"). InnerVolumeSpecName "kube-api-access-rsjnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.507593 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" (UID: "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.517815 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data" (OuterVolumeSpecName: "config-data") pod "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" (UID: "4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.581016 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.581071 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.581084 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.581096 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsjnm\" (UniqueName: \"kubernetes.io/projected/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf-kube-api-access-rsjnm\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.991485 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-hql94" event={"ID":"4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf","Type":"ContainerDied","Data":"1de90ac29d18bc8134c5a8f9409cf4f6984104454efcb5cd68aa76ba8988c519"} Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.991562 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1de90ac29d18bc8134c5a8f9409cf4f6984104454efcb5cd68aa76ba8988c519" Jan 22 14:03:29 crc kubenswrapper[4769]: I0122 14:03:29.991628 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-hql94" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.095285 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 14:03:30 crc kubenswrapper[4769]: E0122 14:03:30.095656 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" containerName="nova-cell0-conductor-db-sync" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.095671 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" containerName="nova-cell0-conductor-db-sync" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.095863 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" containerName="nova-cell0-conductor-db-sync" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.096447 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.098283 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-hh9r6" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.100185 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.114974 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.192001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw5zz\" (UniqueName: \"kubernetes.io/projected/66c7ff68-1167-4dbe-8e53-40f378941703-kube-api-access-qw5zz\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.192328 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.192515 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.294718 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw5zz\" (UniqueName: \"kubernetes.io/projected/66c7ff68-1167-4dbe-8e53-40f378941703-kube-api-access-qw5zz\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.295325 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.296013 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.298679 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.298690 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66c7ff68-1167-4dbe-8e53-40f378941703-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.323204 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw5zz\" (UniqueName: \"kubernetes.io/projected/66c7ff68-1167-4dbe-8e53-40f378941703-kube-api-access-qw5zz\") pod \"nova-cell0-conductor-0\" (UID: \"66c7ff68-1167-4dbe-8e53-40f378941703\") " pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.415022 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.840969 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 22 14:03:30 crc kubenswrapper[4769]: I0122 14:03:30.999364 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"66c7ff68-1167-4dbe-8e53-40f378941703","Type":"ContainerStarted","Data":"a1838467705c040eed132bd26af467e185ad5b62ad067843c8fdb68816dba547"} Jan 22 14:03:32 crc kubenswrapper[4769]: I0122 14:03:32.009264 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"66c7ff68-1167-4dbe-8e53-40f378941703","Type":"ContainerStarted","Data":"2b6a5c6e1d7554b7db842372acbbecfc1c2c021f82e87bb8ae526d0c7a33a714"} Jan 22 14:03:32 crc kubenswrapper[4769]: I0122 14:03:32.009652 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:32 crc kubenswrapper[4769]: I0122 14:03:32.039308 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.039284859 podStartE2EDuration="2.039284859s" podCreationTimestamp="2026-01-22 14:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:32.030521972 +0000 UTC m=+1191.441631911" watchObservedRunningTime="2026-01-22 14:03:32.039284859 +0000 UTC m=+1191.450394808" Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.443634 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.919329 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"] Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.924584 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.932898 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.933065 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.934618 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"] Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.986755 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.986847 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.986925 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:40 crc kubenswrapper[4769]: I0122 14:03:40.986991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.088432 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.088504 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.088581 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.088637 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.098301 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.098441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.122458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.127458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") pod \"nova-cell0-cell-mapping-6vgx7\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.164897 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.167306 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.177933 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.178258 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.180089 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.195723 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.201352 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.245705 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.257678 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297073 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297137 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297180 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297214 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297303 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297330 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.297770 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.346872 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.348675 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.354136 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.356557 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.368003 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.379883 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.381316 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.400315 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401281 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401362 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401391 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401449 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401482 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401504 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401552 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401581 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401613 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401642 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401668 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401735 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401768 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401825 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401848 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401878 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.401911 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.403200 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.404021 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.412350 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.412653 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.417038 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.424477 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.425045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.435494 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") pod \"nova-api-0\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.436441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") pod \"nova-metadata-0\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.445110 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.472908 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506487 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506551 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506596 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506658 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506720 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506766 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506898 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506953 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.506990 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.507053 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.507087 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.507121 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.509175 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.509821 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.510052 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.511481 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.513466 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.515565 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.529612 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.537107 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") pod \"dnsmasq-dns-845d6d6f59-hb2xg\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.539218 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.547380 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.579160 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.609200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.609370 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.609403 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.625457 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.630965 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.632298 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") pod \"nova-scheduler-0\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.807680 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.848033 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:41 crc kubenswrapper[4769]: I0122 14:03:41.914130 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.018391 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"] Jan 22 14:03:42 crc kubenswrapper[4769]: W0122 14:03:42.052880 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3137766d_8b45_47a0_a7ca_f1a3c381450d.slice/crio-0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938 WatchSource:0}: Error finding container 0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938: Status 404 returned error can't find the container with id 0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938 Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.212256 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6vgx7" event={"ID":"3137766d-8b45-47a0-a7ca-f1a3c381450d","Type":"ContainerStarted","Data":"0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938"} Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.232632 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.285223 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.383430 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.396438 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.397699 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.400353 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.400952 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.407160 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"] Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.448204 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.448289 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.448327 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.448370 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.550208 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.550297 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.550342 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.550373 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.554773 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.555261 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.559481 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.570457 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") pod \"nova-cell1-conductor-db-sync-cg5m6\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.654704 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:03:42 crc kubenswrapper[4769]: W0122 14:03:42.699534 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1f2c596_25ff_4c08_9b23_b90aca949e45.slice/crio-8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754 WatchSource:0}: Error finding container 8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754: Status 404 returned error can't find the container with id 8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754 Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.716232 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:42 crc kubenswrapper[4769]: I0122 14:03:42.742203 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:42 crc kubenswrapper[4769]: W0122 14:03:42.753854 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9c060e2_5b33_4452_bc58_2ce6e9f865d4.slice/crio-d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a WatchSource:0}: Error finding container d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a: Status 404 returned error can't find the container with id d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.217238 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"] Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.223387 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9c060e2-5b33-4452-bc58-2ce6e9f865d4","Type":"ContainerStarted","Data":"d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.224821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1f2c596-25ff-4c08-9b23-b90aca949e45","Type":"ContainerStarted","Data":"8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.233905 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerStarted","Data":"7522f136416e24ddb1e2da868b4df82fccac17698bad3fc0cffb8764c95aa35e"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.237230 4769 generic.go:334] "Generic (PLEG): container finished" podID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerID="5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4" exitCode=0 Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.237280 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerDied","Data":"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.237340 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerStarted","Data":"07ff2a18726b3f734621e81451a91539db3bacf8cce99d939c1f38660bd71e0c"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.246323 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerStarted","Data":"3f6efd7484c8f82f7294e9fc3f2dedfa64a83c4e487c60f5f3d00b72dea2aeff"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.254973 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6vgx7" event={"ID":"3137766d-8b45-47a0-a7ca-f1a3c381450d","Type":"ContainerStarted","Data":"7c716f4cbcf6f24dd054838f2140dd17dfc86e227f15ff8751421f1115943a30"} Jan 22 14:03:43 crc kubenswrapper[4769]: I0122 14:03:43.295132 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6vgx7" 
podStartSLOduration=3.295113053 podStartE2EDuration="3.295113053s" podCreationTimestamp="2026-01-22 14:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:43.287068836 +0000 UTC m=+1202.698178765" watchObservedRunningTime="2026-01-22 14:03:43.295113053 +0000 UTC m=+1202.706222982" Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.272308 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerStarted","Data":"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb"} Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.274340 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.280478 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" event={"ID":"60fa7062-c4e9-4700-88e1-af5262989c6f","Type":"ContainerStarted","Data":"b968152c0d0005bd0bae6dd12531f4e3ac4944479a46e411981d500bf6e21a03"} Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.280538 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" event={"ID":"60fa7062-c4e9-4700-88e1-af5262989c6f","Type":"ContainerStarted","Data":"4281687c125bb60dc1e9c561adac44c125c994b9787a7a132375bd1d9a17e1e3"} Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.348258 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" podStartSLOduration=2.348236136 podStartE2EDuration="2.348236136s" podCreationTimestamp="2026-01-22 14:03:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:44.32623331 +0000 UTC m=+1203.737343249" watchObservedRunningTime="2026-01-22 14:03:44.348236136 +0000 UTC m=+1203.759346065" Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.352267 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" podStartSLOduration=3.3522513050000002 podStartE2EDuration="3.352251305s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:44.302206387 +0000 UTC m=+1203.713316326" watchObservedRunningTime="2026-01-22 14:03:44.352251305 +0000 UTC m=+1203.763361234" Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.648689 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:44 crc kubenswrapper[4769]: I0122 14:03:44.659889 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.330736 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9c060e2-5b33-4452-bc58-2ce6e9f865d4","Type":"ContainerStarted","Data":"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.332264 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"f1f2c596-25ff-4c08-9b23-b90aca949e45","Type":"ContainerStarted","Data":"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.332379 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516" gracePeriod=30 Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.334758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerStarted","Data":"05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.334803 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerStarted","Data":"6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.337766 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerStarted","Data":"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.337818 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerStarted","Data":"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc"} Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.337893 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-log" containerID="cri-o://9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" gracePeriod=30 Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.338015 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-metadata" containerID="cri-o://c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" gracePeriod=30 Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.353667 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.902607257 podStartE2EDuration="6.353651896s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="2026-01-22 14:03:42.759066609 +0000 UTC m=+1202.170176538" lastFinishedPulling="2026-01-22 14:03:46.210111248 +0000 UTC m=+1205.621221177" observedRunningTime="2026-01-22 14:03:47.350749278 +0000 UTC m=+1206.761859207" watchObservedRunningTime="2026-01-22 14:03:47.353651896 +0000 UTC m=+1206.764761825" Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.380287 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.472941472 podStartE2EDuration="6.380263839s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="2026-01-22 14:03:42.292205124 +0000 UTC m=+1201.703315053" lastFinishedPulling="2026-01-22 14:03:46.199527471 +0000 UTC m=+1205.610637420" observedRunningTime="2026-01-22 14:03:47.371078849 +0000 UTC 
m=+1206.782188788" watchObservedRunningTime="2026-01-22 14:03:47.380263839 +0000 UTC m=+1206.791373768" Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.387560 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.467106224 podStartE2EDuration="6.387541206s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="2026-01-22 14:03:42.28689889 +0000 UTC m=+1201.698008819" lastFinishedPulling="2026-01-22 14:03:46.207333872 +0000 UTC m=+1205.618443801" observedRunningTime="2026-01-22 14:03:47.385777719 +0000 UTC m=+1206.796887658" watchObservedRunningTime="2026-01-22 14:03:47.387541206 +0000 UTC m=+1206.798651135" Jan 22 14:03:47 crc kubenswrapper[4769]: I0122 14:03:47.413673 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.9156870010000002 podStartE2EDuration="6.413654095s" podCreationTimestamp="2026-01-22 14:03:41 +0000 UTC" firstStartedPulling="2026-01-22 14:03:42.701613649 +0000 UTC m=+1202.112723578" lastFinishedPulling="2026-01-22 14:03:46.199580743 +0000 UTC m=+1205.610690672" observedRunningTime="2026-01-22 14:03:47.407685114 +0000 UTC m=+1206.818795043" watchObservedRunningTime="2026-01-22 14:03:47.413654095 +0000 UTC m=+1206.824764024" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.301588 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.339110 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348835 4769 generic.go:334] "Generic (PLEG): container finished" podID="bba74422-5547-4700-919b-fd9707feaf8d" containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" exitCode=0 Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348863 4769 generic.go:334] "Generic (PLEG): container finished" podID="bba74422-5547-4700-919b-fd9707feaf8d" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" exitCode=143 Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348907 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerDied","Data":"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271"} Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348923 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348952 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerDied","Data":"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc"} Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348963 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bba74422-5547-4700-919b-fd9707feaf8d","Type":"ContainerDied","Data":"3f6efd7484c8f82f7294e9fc3f2dedfa64a83c4e487c60f5f3d00b72dea2aeff"} Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.348980 4769 scope.go:117] "RemoveContainer" containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.370129 4769 scope.go:117] "RemoveContainer" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.390184 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") pod \"bba74422-5547-4700-919b-fd9707feaf8d\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.390282 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") pod \"bba74422-5547-4700-919b-fd9707feaf8d\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.390309 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") pod \"bba74422-5547-4700-919b-fd9707feaf8d\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.390427 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") pod \"bba74422-5547-4700-919b-fd9707feaf8d\" (UID: \"bba74422-5547-4700-919b-fd9707feaf8d\") " Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.394128 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs" (OuterVolumeSpecName: "logs") pod "bba74422-5547-4700-919b-fd9707feaf8d" (UID: "bba74422-5547-4700-919b-fd9707feaf8d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.396075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh" (OuterVolumeSpecName: "kube-api-access-46dlh") pod "bba74422-5547-4700-919b-fd9707feaf8d" (UID: "bba74422-5547-4700-919b-fd9707feaf8d"). InnerVolumeSpecName "kube-api-access-46dlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.426943 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data" (OuterVolumeSpecName: "config-data") pod "bba74422-5547-4700-919b-fd9707feaf8d" (UID: "bba74422-5547-4700-919b-fd9707feaf8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.433545 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bba74422-5547-4700-919b-fd9707feaf8d" (UID: "bba74422-5547-4700-919b-fd9707feaf8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.479211 4769 scope.go:117] "RemoveContainer" containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" Jan 22 14:03:48 crc kubenswrapper[4769]: E0122 14:03:48.479727 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": container with ID starting with c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271 not found: ID does not exist" containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.479763 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271"} err="failed to get container status \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": rpc error: code = NotFound desc = could not find container \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": container with ID starting with c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271 not found: ID does not exist" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.479808 4769 scope.go:117] "RemoveContainer" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" Jan 22 14:03:48 crc kubenswrapper[4769]: E0122 14:03:48.480174 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": container with ID starting with 9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc not found: ID does not exist" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480197 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc"} err="failed to get container status \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": rpc error: code = NotFound desc = could not find container \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": container with ID starting with 9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc not found: ID does not exist" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480210 4769 scope.go:117] "RemoveContainer" 
containerID="c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480462 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271"} err="failed to get container status \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": rpc error: code = NotFound desc = could not find container \"c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271\": container with ID starting with c849bc4cd2b0b4f1d280dbc38ffb8f221095344c72df5a62a2c4b5f5b13cb271 not found: ID does not exist" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480481 4769 scope.go:117] "RemoveContainer" containerID="9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.480741 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc"} err="failed to get container status \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": rpc error: code = NotFound desc = could not find container \"9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc\": container with ID starting with 9cabcd9a1b25026195fe87254dcb14f9a323c2b720deddf305f2f02bf4a074fc not found: ID does not exist" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.492374 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46dlh\" (UniqueName: \"kubernetes.io/projected/bba74422-5547-4700-919b-fd9707feaf8d-kube-api-access-46dlh\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.492409 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.492457 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bba74422-5547-4700-919b-fd9707feaf8d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.492468 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bba74422-5547-4700-919b-fd9707feaf8d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.682196 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.692038 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.708981 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:48 crc kubenswrapper[4769]: E0122 14:03:48.709458 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-log" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.709479 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-log" Jan 22 14:03:48 crc kubenswrapper[4769]: E0122 14:03:48.709499 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba74422-5547-4700-919b-fd9707feaf8d" 
containerName="nova-metadata-metadata" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.709508 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-metadata" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.709729 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-log" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.709760 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba74422-5547-4700-919b-fd9707feaf8d" containerName="nova-metadata-metadata" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.710945 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.713188 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.715162 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.718616 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798492 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798603 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798725 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798850 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.798941 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.896044 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba74422-5547-4700-919b-fd9707feaf8d" path="/var/lib/kubelet/pods/bba74422-5547-4700-919b-fd9707feaf8d/volumes" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.900728 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.900781 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.900927 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.900966 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.901007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.901178 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.905672 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.905913 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.906708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:48 crc kubenswrapper[4769]: I0122 14:03:48.921480 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") pod \"nova-metadata-0\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " pod="openstack/nova-metadata-0" Jan 22 14:03:49 crc 
kubenswrapper[4769]: I0122 14:03:49.038546 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:49 crc kubenswrapper[4769]: I0122 14:03:49.408029 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:49 crc kubenswrapper[4769]: W0122 14:03:49.416178 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a025db2_7758_45ec_a6dc_d5bbd07e339b.slice/crio-dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77 WatchSource:0}: Error finding container dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77: Status 404 returned error can't find the container with id dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77 Jan 22 14:03:50 crc kubenswrapper[4769]: I0122 14:03:50.399634 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerStarted","Data":"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef"} Jan 22 14:03:50 crc kubenswrapper[4769]: I0122 14:03:50.399975 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerStarted","Data":"dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77"} Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.530902 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.531439 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.809967 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.850233 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.878219 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"] Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.878509 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="dnsmasq-dns" containerID="cri-o://fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8" gracePeriod=10 Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.915004 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.915048 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 14:03:51 crc kubenswrapper[4769]: I0122 14:03:51.971936 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.424068 4769 generic.go:334] "Generic (PLEG): container finished" podID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerID="fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8" exitCode=0 Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.424505 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerDied","Data":"fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8"} Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.424537 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" event={"ID":"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4","Type":"ContainerDied","Data":"d6c99dc7e96389aa270b082a25059df7fce55051d25083a5534ef853a5abe126"} Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.424567 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6c99dc7e96389aa270b082a25059df7fce55051d25083a5534ef853a5abe126" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.427135 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerStarted","Data":"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e"} Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.451922 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.451900258 podStartE2EDuration="4.451900258s" podCreationTimestamp="2026-01-22 14:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:52.448229258 +0000 UTC m=+1211.859339187" watchObservedRunningTime="2026-01-22 14:03:52.451900258 +0000 UTC m=+1211.863010197" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.478344 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.501706 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.583814 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584021 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584154 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584288 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584389 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.584527 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") pod \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\" (UID: \"e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4\") " Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.591511 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr" (OuterVolumeSpecName: "kube-api-access-lw4nr") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "kube-api-access-lw4nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.621136 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.621161 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.185:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.652194 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config" (OuterVolumeSpecName: "config") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.661362 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.667670 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.675327 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686655 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686692 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686705 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw4nr\" (UniqueName: \"kubernetes.io/projected/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-kube-api-access-lw4nr\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686715 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.686725 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.706613 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" (UID: "e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:03:52 crc kubenswrapper[4769]: I0122 14:03:52.788769 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.436576 4769 generic.go:334] "Generic (PLEG): container finished" podID="3137766d-8b45-47a0-a7ca-f1a3c381450d" containerID="7c716f4cbcf6f24dd054838f2140dd17dfc86e227f15ff8751421f1115943a30" exitCode=0 Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.436673 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6vgx7" event={"ID":"3137766d-8b45-47a0-a7ca-f1a3c381450d","Type":"ContainerDied","Data":"7c716f4cbcf6f24dd054838f2140dd17dfc86e227f15ff8751421f1115943a30"} Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.437144 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-gjxrr" Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.474736 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"] Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.483526 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-gjxrr"] Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.675434 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:53 crc kubenswrapper[4769]: I0122 14:03:53.675659 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6e7522e6-de75-492d-b445-a463f875e393" containerName="kube-state-metrics" containerID="cri-o://b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" gracePeriod=30 Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.039160 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.039518 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.189104 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.219328 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") pod \"6e7522e6-de75-492d-b445-a463f875e393\" (UID: \"6e7522e6-de75-492d-b445-a463f875e393\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.228096 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt" (OuterVolumeSpecName: "kube-api-access-9fdpt") pod "6e7522e6-de75-492d-b445-a463f875e393" (UID: "6e7522e6-de75-492d-b445-a463f875e393"). InnerVolumeSpecName "kube-api-access-9fdpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.322992 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fdpt\" (UniqueName: \"kubernetes.io/projected/6e7522e6-de75-492d-b445-a463f875e393-kube-api-access-9fdpt\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.449698 4769 generic.go:334] "Generic (PLEG): container finished" podID="6e7522e6-de75-492d-b445-a463f875e393" containerID="b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" exitCode=2 Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.450886 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e7522e6-de75-492d-b445-a463f875e393","Type":"ContainerDied","Data":"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f"} Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.450917 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.450923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e7522e6-de75-492d-b445-a463f875e393","Type":"ContainerDied","Data":"cb0f27b9c3686fd6437f8bd8519d2239c1ac22e630bed57eba5dc3bb400528c4"} Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.450934 4769 scope.go:117] "RemoveContainer" containerID="b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.513849 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.527525 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.536234 4769 scope.go:117] "RemoveContainer" containerID="b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" Jan 22 14:03:54 crc kubenswrapper[4769]: E0122 14:03:54.542290 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f\": container with ID starting with b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f not found: ID does not exist" containerID="b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.542486 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f"} err="failed to get container status \"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f\": rpc error: code = NotFound desc = could not find container \"b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f\": container with ID starting with b5c1102409d5a3f0491aca7b10a914b1f650214297aaff7b15a9e7d0fb19780f not found: ID does not exist" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.548676 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:54 crc kubenswrapper[4769]: E0122 14:03:54.549354 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="dnsmasq-dns" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549423 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="dnsmasq-dns" Jan 22 14:03:54 crc kubenswrapper[4769]: E0122 14:03:54.549498 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="init" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549550 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="init" Jan 22 14:03:54 crc kubenswrapper[4769]: E0122 14:03:54.549638 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7522e6-de75-492d-b445-a463f875e393" containerName="kube-state-metrics" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549690 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7522e6-de75-492d-b445-a463f875e393" containerName="kube-state-metrics" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549937 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" containerName="dnsmasq-dns" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.549999 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7522e6-de75-492d-b445-a463f875e393" containerName="kube-state-metrics" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.550673 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.553041 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.554107 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.579223 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.634864 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.634921 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.635028 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.635127 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sn7h\" (UniqueName: \"kubernetes.io/projected/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-api-access-4sn7h\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.737737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.737810 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.737852 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.737882 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sn7h\" (UniqueName: \"kubernetes.io/projected/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-api-access-4sn7h\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.742993 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.752074 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.761272 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.764255 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sn7h\" (UniqueName: \"kubernetes.io/projected/27867d6f-28eb-45b6-afd4-9ad9da5a5a0f-kube-api-access-4sn7h\") pod \"kube-state-metrics-0\" (UID: \"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f\") " pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.857642 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.891490 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.919374 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7522e6-de75-492d-b445-a463f875e393" path="/var/lib/kubelet/pods/6e7522e6-de75-492d-b445-a463f875e393/volumes" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.919962 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4" path="/var/lib/kubelet/pods/e41e0eab-0c56-48e0-a36d-ccbdd73ea0f4/volumes" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.943450 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") pod \"3137766d-8b45-47a0-a7ca-f1a3c381450d\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.943771 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") pod \"3137766d-8b45-47a0-a7ca-f1a3c381450d\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.943985 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") pod \"3137766d-8b45-47a0-a7ca-f1a3c381450d\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.944304 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") pod \"3137766d-8b45-47a0-a7ca-f1a3c381450d\" (UID: \"3137766d-8b45-47a0-a7ca-f1a3c381450d\") " Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.949777 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x" (OuterVolumeSpecName: "kube-api-access-qpn9x") pod "3137766d-8b45-47a0-a7ca-f1a3c381450d" (UID: "3137766d-8b45-47a0-a7ca-f1a3c381450d"). InnerVolumeSpecName "kube-api-access-qpn9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.949947 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts" (OuterVolumeSpecName: "scripts") pod "3137766d-8b45-47a0-a7ca-f1a3c381450d" (UID: "3137766d-8b45-47a0-a7ca-f1a3c381450d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.979270 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3137766d-8b45-47a0-a7ca-f1a3c381450d" (UID: "3137766d-8b45-47a0-a7ca-f1a3c381450d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:54 crc kubenswrapper[4769]: I0122 14:03:54.985410 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data" (OuterVolumeSpecName: "config-data") pod "3137766d-8b45-47a0-a7ca-f1a3c381450d" (UID: "3137766d-8b45-47a0-a7ca-f1a3c381450d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.046650 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpn9x\" (UniqueName: \"kubernetes.io/projected/3137766d-8b45-47a0-a7ca-f1a3c381450d-kube-api-access-qpn9x\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.047003 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.047016 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.047029 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3137766d-8b45-47a0-a7ca-f1a3c381450d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.386769 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: W0122 14:03:55.392244 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27867d6f_28eb_45b6_afd4_9ad9da5a5a0f.slice/crio-ef096966a058709f0ff12d92b098282c0025220288546b81e1a37f5c81c924f1 WatchSource:0}: Error finding container ef096966a058709f0ff12d92b098282c0025220288546b81e1a37f5c81c924f1: Status 404 returned error can't find the container with id ef096966a058709f0ff12d92b098282c0025220288546b81e1a37f5c81c924f1 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.459114 4769 generic.go:334] "Generic (PLEG): container finished" podID="60fa7062-c4e9-4700-88e1-af5262989c6f" containerID="b968152c0d0005bd0bae6dd12531f4e3ac4944479a46e411981d500bf6e21a03" exitCode=0 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.459193 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" event={"ID":"60fa7062-c4e9-4700-88e1-af5262989c6f","Type":"ContainerDied","Data":"b968152c0d0005bd0bae6dd12531f4e3ac4944479a46e411981d500bf6e21a03"} Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.463218 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6vgx7" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.463223 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6vgx7" event={"ID":"3137766d-8b45-47a0-a7ca-f1a3c381450d","Type":"ContainerDied","Data":"0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938"} Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.463545 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1ef3a355c24af9ddca6d17ce3327e51772b713889345a7a1b20a2fbc113938" Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.464905 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f","Type":"ContainerStarted","Data":"ef096966a058709f0ff12d92b098282c0025220288546b81e1a37f5c81c924f1"} Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.587277 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.587493 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" containerID="cri-o://936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.604765 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.605122 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-log" containerID="cri-o://6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.605195 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" containerID="cri-o://05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.618055 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.978775 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.979386 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-central-agent" containerID="cri-o://15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.979497 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-notification-agent" containerID="cri-o://b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.979464 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="sg-core" 
containerID="cri-o://18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947" gracePeriod=30 Jan 22 14:03:55 crc kubenswrapper[4769]: I0122 14:03:55.979633 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="proxy-httpd" containerID="cri-o://0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533" gracePeriod=30 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.479844 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerID="6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434" exitCode=143 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.479908 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerDied","Data":"6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434"} Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.487025 4769 generic.go:334] "Generic (PLEG): container finished" podID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerID="0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533" exitCode=0 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.487065 4769 generic.go:334] "Generic (PLEG): container finished" podID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerID="18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947" exitCode=2 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.487084 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533"} Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.487122 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947"} Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.488881 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27867d6f-28eb-45b6-afd4-9ad9da5a5a0f","Type":"ContainerStarted","Data":"1336d5463792b849ea5857a986cf5130df43494f713c418bbe274849cf16ec71"} Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.489177 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-log" containerID="cri-o://8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" gracePeriod=30 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.489243 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-metadata" containerID="cri-o://e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" gracePeriod=30 Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.537262 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.188636513 podStartE2EDuration="2.537235878s" podCreationTimestamp="2026-01-22 14:03:54 +0000 UTC" firstStartedPulling="2026-01-22 14:03:55.395047446 +0000 UTC m=+1214.806157375" lastFinishedPulling="2026-01-22 14:03:55.743646811 +0000 UTC 
m=+1215.154756740" observedRunningTime="2026-01-22 14:03:56.515719074 +0000 UTC m=+1215.926829013" watchObservedRunningTime="2026-01-22 14:03:56.537235878 +0000 UTC m=+1215.948345807" Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.901475 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:56 crc kubenswrapper[4769]: E0122 14:03:56.921064 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 is running failed: container process not found" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 14:03:56 crc kubenswrapper[4769]: E0122 14:03:56.925250 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 is running failed: container process not found" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 14:03:56 crc kubenswrapper[4769]: E0122 14:03:56.938144 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 is running failed: container process not found" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 22 14:03:56 crc kubenswrapper[4769]: E0122 14:03:56.938213 4769 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.950903 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.998036 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") pod \"60fa7062-c4e9-4700-88e1-af5262989c6f\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.998080 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") pod \"60fa7062-c4e9-4700-88e1-af5262989c6f\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " Jan 22 14:03:56 crc kubenswrapper[4769]: I0122 14:03:56.998108 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") pod \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.008020 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5" (OuterVolumeSpecName: "kube-api-access-pggb5") pod "60fa7062-c4e9-4700-88e1-af5262989c6f" (UID: "60fa7062-c4e9-4700-88e1-af5262989c6f"). InnerVolumeSpecName "kube-api-access-pggb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.008356 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5" (OuterVolumeSpecName: "kube-api-access-9nlr5") pod "c9c060e2-5b33-4452-bc58-2ce6e9f865d4" (UID: "c9c060e2-5b33-4452-bc58-2ce6e9f865d4"). InnerVolumeSpecName "kube-api-access-9nlr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.030430 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data" (OuterVolumeSpecName: "config-data") pod "60fa7062-c4e9-4700-88e1-af5262989c6f" (UID: "60fa7062-c4e9-4700-88e1-af5262989c6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.086670 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099316 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") pod \"60fa7062-c4e9-4700-88e1-af5262989c6f\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099369 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099403 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099475 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") pod \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099503 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") pod \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\" (UID: \"c9c060e2-5b33-4452-bc58-2ce6e9f865d4\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.099542 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") pod \"60fa7062-c4e9-4700-88e1-af5262989c6f\" (UID: \"60fa7062-c4e9-4700-88e1-af5262989c6f\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.100037 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.100063 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pggb5\" (UniqueName: \"kubernetes.io/projected/60fa7062-c4e9-4700-88e1-af5262989c6f-kube-api-access-pggb5\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.100076 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlr5\" (UniqueName: \"kubernetes.io/projected/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-kube-api-access-9nlr5\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.107235 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts" (OuterVolumeSpecName: "scripts") pod "60fa7062-c4e9-4700-88e1-af5262989c6f" (UID: "60fa7062-c4e9-4700-88e1-af5262989c6f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.140061 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb" (OuterVolumeSpecName: "kube-api-access-2gqfb") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "kube-api-access-2gqfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.140432 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9c060e2-5b33-4452-bc58-2ce6e9f865d4" (UID: "c9c060e2-5b33-4452-bc58-2ce6e9f865d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.144047 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60fa7062-c4e9-4700-88e1-af5262989c6f" (UID: "60fa7062-c4e9-4700-88e1-af5262989c6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.165254 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data" (OuterVolumeSpecName: "config-data") pod "c9c060e2-5b33-4452-bc58-2ce6e9f865d4" (UID: "c9c060e2-5b33-4452-bc58-2ce6e9f865d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.183167 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.200996 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201120 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201150 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") pod \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\" (UID: \"7a025db2-7758-45ec-a6dc-d5bbd07e339b\") " Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201874 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201901 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gqfb\" (UniqueName: \"kubernetes.io/projected/7a025db2-7758-45ec-a6dc-d5bbd07e339b-kube-api-access-2gqfb\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201915 4769 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201928 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201940 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9c060e2-5b33-4452-bc58-2ce6e9f865d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.201953 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60fa7062-c4e9-4700-88e1-af5262989c6f-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.202136 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs" (OuterVolumeSpecName: "logs") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.232990 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data" (OuterVolumeSpecName: "config-data") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.235126 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a025db2-7758-45ec-a6dc-d5bbd07e339b" (UID: "7a025db2-7758-45ec-a6dc-d5bbd07e339b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.304160 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.304209 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a025db2-7758-45ec-a6dc-d5bbd07e339b-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.304220 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a025db2-7758-45ec-a6dc-d5bbd07e339b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500716 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" exitCode=0 Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500755 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9c060e2-5b33-4452-bc58-2ce6e9f865d4","Type":"ContainerDied","Data":"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c9c060e2-5b33-4452-bc58-2ce6e9f865d4","Type":"ContainerDied","Data":"d4ad591c838bef0b0a89079c05faf04520570b378a76c8d398873ab928b3ec0a"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500809 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.500833 4769 scope.go:117] "RemoveContainer" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.504812 4769 generic.go:334] "Generic (PLEG): container finished" podID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerID="15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46" exitCode=0 Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.504873 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.509080 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" event={"ID":"60fa7062-c4e9-4700-88e1-af5262989c6f","Type":"ContainerDied","Data":"4281687c125bb60dc1e9c561adac44c125c994b9787a7a132375bd1d9a17e1e3"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.509121 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4281687c125bb60dc1e9c561adac44c125c994b9787a7a132375bd1d9a17e1e3" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.509200 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cg5m6" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519696 4769 generic.go:334] "Generic (PLEG): container finished" podID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" exitCode=0 Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519735 4769 generic.go:334] "Generic (PLEG): container finished" podID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" exitCode=143 Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519760 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519888 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerDied","Data":"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerDied","Data":"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.519935 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a025db2-7758-45ec-a6dc-d5bbd07e339b","Type":"ContainerDied","Data":"dcd8422210c770f204f5cc303d111ef48b5faf478309f593879a84852fa5cb77"} Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.520514 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.542918 4769 scope.go:117] "RemoveContainer" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.543359 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1\": container with ID starting with 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 not found: ID does not exist" containerID="936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.543386 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1"} err="failed to get container status \"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1\": rpc error: code = NotFound desc = could not find container \"936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1\": container with ID starting with 936c9f73bbce73a5f4e62ca042688b2e127679bb594e2bf6053e27831d6b26d1 not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.543405 4769 scope.go:117] "RemoveContainer" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.550219 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.565190 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.577884 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.585683 4769 scope.go:117] "RemoveContainer" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.587383 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588014 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-log" Jan 22 14:03:57 crc 
kubenswrapper[4769]: I0122 14:03:57.588089 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-log" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588112 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-metadata" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588121 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-metadata" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588144 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3137766d-8b45-47a0-a7ca-f1a3c381450d" containerName="nova-manage" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588153 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3137766d-8b45-47a0-a7ca-f1a3c381450d" containerName="nova-manage" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588171 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588179 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.588196 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60fa7062-c4e9-4700-88e1-af5262989c6f" containerName="nova-cell1-conductor-db-sync" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588205 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="60fa7062-c4e9-4700-88e1-af5262989c6f" containerName="nova-cell1-conductor-db-sync" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588455 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3137766d-8b45-47a0-a7ca-f1a3c381450d" containerName="nova-manage" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588478 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" containerName="nova-scheduler-scheduler" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588494 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="60fa7062-c4e9-4700-88e1-af5262989c6f" containerName="nova-cell1-conductor-db-sync" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588512 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-metadata" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.588525 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" containerName="nova-metadata-log" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.589461 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.596106 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.597698 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.607407 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.610076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.612577 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.623993 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.633894 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.636184 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.640361 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.640655 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.641916 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.652848 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.655179 4769 scope.go:117] "RemoveContainer" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.657474 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": container with ID starting with e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e not found: ID does not exist" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.657521 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e"} err="failed to get container status \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": rpc error: code = NotFound desc = could not find container \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": container with ID starting with e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.657550 4769 scope.go:117] "RemoveContainer" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" Jan 22 14:03:57 crc kubenswrapper[4769]: E0122 14:03:57.658014 4769 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": container with ID starting with 8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef not found: ID does not exist" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658051 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef"} err="failed to get container status \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": rpc error: code = NotFound desc = could not find container \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": container with ID starting with 8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658080 4769 scope.go:117] "RemoveContainer" containerID="e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658364 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e"} err="failed to get container status \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": rpc error: code = NotFound desc = could not find container \"e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e\": container with ID starting with e52c9103648a0a34c7603b656d8929a8feba8a4d8ec58efa070b2b3c3423b00e not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658397 4769 scope.go:117] "RemoveContainer" containerID="8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.658766 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef"} err="failed to get container status \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": rpc error: code = NotFound desc = could not find container \"8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef\": container with ID starting with 8c789004b4d456cad7d8a9e052bed8c52200c81dfed6033126bfe22fc57a38ef not found: ID does not exist" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714538 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714607 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714648 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") pod 
\"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714801 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnflv\" (UniqueName: \"kubernetes.io/projected/e291c368-66b3-42b3-ad52-e3cd93471116-kube-api-access-vnflv\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714827 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.714873 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.816810 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.816853 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.816997 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817097 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817147 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817254 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " 
pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817446 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817560 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnflv\" (UniqueName: \"kubernetes.io/projected/e291c368-66b3-42b3-ad52-e3cd93471116-kube-api-access-vnflv\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817681 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817749 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.817847 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.821230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.821415 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e291c368-66b3-42b3-ad52-e3cd93471116-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.821567 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.822042 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.832598 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnflv\" (UniqueName: 
\"kubernetes.io/projected/e291c368-66b3-42b3-ad52-e3cd93471116-kube-api-access-vnflv\") pod \"nova-cell1-conductor-0\" (UID: \"e291c368-66b3-42b3-ad52-e3cd93471116\") " pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.833353 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") pod \"nova-scheduler-0\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919579 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919654 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919722 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.919754 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.920151 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.920236 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.923692 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.924229 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.924729 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.936417 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") pod \"nova-metadata-0\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " pod="openstack/nova-metadata-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.953747 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.974057 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
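Each volume above moves through three reconciler stages in order: operationExecutor.VerifyControllerAttachedVolume started, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A small sketch, assuming the exact message strings printed in this log, that groups lines by (pod, volume) so a volume stuck before "mounted" stands out:

    import re
    from collections import defaultdict

    # The three stages a volume passes through in the reconciler lines above.
    STAGES = [
        ("attached", 'operationExecutor.VerifyControllerAttachedVolume started'),
        ("mounting", 'operationExecutor.MountVolume started'),
        ("mounted", 'MountVolume.SetUp succeeded'),
    ]
    VOL_RE = re.compile(r'volume \\"(?P<vol>[^"\\]+)\\"')  # matches: volume \"config-data\"
    POD_RE = re.compile(r' pod="(?P<pod>[^"]+)"')          # matches: pod="openstack/nova-scheduler-0"

    progress = defaultdict(set)

    def feed(line: str) -> None:
        vol, pod = VOL_RE.search(line), POD_RE.search(line)
        if not (vol and pod):
            return
        for stage, marker in STAGES:
            if marker in line:
                progress[(pod["pod"], vol["vol"])].add(stage)

    feed(r'Jan 22 14:03:57 crc kubenswrapper[4769]: I0122 14:03:57.822042 4769 '
         r'operation_generator.go:637] "MountVolume.SetUp succeeded for volume '
         r'\"config-data\" ... " pod="openstack/nova-scheduler-0"')
    print(dict(progress))  # {('openstack/nova-scheduler-0', 'config-data'): {'mounted'}}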
Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.371511 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.516239 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.524144 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.530559 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7875d554-e943-402f-b176-8644590e7926","Type":"ContainerStarted","Data":"572df80009e2badcb09d845c35585498e31a50e4449686f5a44d8ee1e3d26270"} Jan 22 14:03:58 crc kubenswrapper[4769]: W0122 14:03:58.539359 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c7cab01_0731_4a76_a6d5_b6d0905b2386.slice/crio-8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855 WatchSource:0}: Error finding container 8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855: Status 404 returned error can't find the container with id 8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855 Jan 22 14:03:58 crc kubenswrapper[4769]: W0122 14:03:58.541605 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode291c368_66b3_42b3_ad52_e3cd93471116.slice/crio-24c46357ea9a339f1f1d348b7536063beee6aeea67da31590b33fcf5c98dd7a0 WatchSource:0}: Error finding container 24c46357ea9a339f1f1d348b7536063beee6aeea67da31590b33fcf5c98dd7a0: Status 404 returned error can't find the container with id 24c46357ea9a339f1f1d348b7536063beee6aeea67da31590b33fcf5c98dd7a0 Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.900169 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a025db2-7758-45ec-a6dc-d5bbd07e339b" path="/var/lib/kubelet/pods/7a025db2-7758-45ec-a6dc-d5bbd07e339b/volumes" Jan 22 14:03:58 crc kubenswrapper[4769]: I0122 14:03:58.900855 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9c060e2-5b33-4452-bc58-2ce6e9f865d4" path="/var/lib/kubelet/pods/c9c060e2-5b33-4452-bc58-2ce6e9f865d4/volumes" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.545214 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e291c368-66b3-42b3-ad52-e3cd93471116","Type":"ContainerStarted","Data":"b72fd79a23896da108be81c426ccddd24e1e3a48d1f49aceeabe6aea1b1d092e"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.545638 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e291c368-66b3-42b3-ad52-e3cd93471116","Type":"ContainerStarted","Data":"24c46357ea9a339f1f1d348b7536063beee6aeea67da31590b33fcf5c98dd7a0"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.545676 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.547240 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7875d554-e943-402f-b176-8644590e7926","Type":"ContainerStarted","Data":"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2"}
Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.549606 4769 generic.go:334] "Generic (PLEG): container finished" podID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerID="05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631" exitCode=0 Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.549670 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerDied","Data":"05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.551372 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerStarted","Data":"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.551406 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerStarted","Data":"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.551424 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerStarted","Data":"8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855"} Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.566392 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.5663753209999998 podStartE2EDuration="2.566375321s" podCreationTimestamp="2026-01-22 14:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:59.562480966 +0000 UTC m=+1218.973590905" watchObservedRunningTime="2026-01-22 14:03:59.566375321 +0000 UTC m=+1218.977485250" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.596771 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.596752736 podStartE2EDuration="2.596752736s" podCreationTimestamp="2026-01-22 14:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:59.585280484 +0000 UTC m=+1218.996390413" watchObservedRunningTime="2026-01-22 14:03:59.596752736 +0000 UTC m=+1219.007862665" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.690605 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
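The podStartSLOduration values above are plain timestamp arithmetic: with no image pull involved (both pull timestamps are the zero value 0001-01-01), the printed duration equals watchObservedRunningTime minus podCreationTimestamp, e.g. 14:03:59.566375321 minus 14:03:57 is 2.566375321s for nova-cell1-conductor-0. A quick verification sketch; timestamps are truncated to microseconds because Python's datetime carries no nanoseconds:

    from datetime import datetime, timezone

    # Re-derive podStartSLOduration for nova-cell1-conductor-0 from the values
    # logged above (nanoseconds truncated to microseconds for datetime).
    created = datetime(2026, 1, 22, 14, 3, 57, tzinfo=timezone.utc)
    observed = datetime(2026, 1, 22, 14, 3, 59, 566375, tzinfo=timezone.utc)
    print((observed - created).total_seconds())  # 2.566375, matching podStartSLOduration=2.566375321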
Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.719131 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.719114189 podStartE2EDuration="2.719114189s" podCreationTimestamp="2026-01-22 14:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:03:59.610638733 +0000 UTC m=+1219.021748662" watchObservedRunningTime="2026-01-22 14:03:59.719114189 +0000 UTC m=+1219.130224118" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.862911 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") pod \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.863314 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") pod \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.863380 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") pod \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.863470 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") pod \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\" (UID: \"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d\") " Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.863810 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs" (OuterVolumeSpecName: "logs") pod "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" (UID: "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.864072 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.868695 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl" (OuterVolumeSpecName: "kube-api-access-b4xgl") pod "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" (UID: "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d"). InnerVolumeSpecName "kube-api-access-b4xgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.889906 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data" (OuterVolumeSpecName: "config-data") pod "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" (UID: "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.903985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" (UID: "0a87cdd0-cc09-4004-90bf-bbe9bd9b453d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.967385 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4xgl\" (UniqueName: \"kubernetes.io/projected/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-kube-api-access-b4xgl\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.967445 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:03:59 crc kubenswrapper[4769]: I0122 14:03:59.967459 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.561109 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.561313 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0a87cdd0-cc09-4004-90bf-bbe9bd9b453d","Type":"ContainerDied","Data":"7522f136416e24ddb1e2da868b4df82fccac17698bad3fc0cffb8764c95aa35e"} Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.562036 4769 scope.go:117] "RemoveContainer" containerID="05fba83fc66875a0c66f3a2ceadc7ebd73ed593ba2c2b3f6ecd6111b7621b631" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.597369 4769 scope.go:117] "RemoveContainer" containerID="6234a0446d758d662f481f2255e5c0d82c8486e9bd8315786bd7329443cc3434" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.629221 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.663572 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.676914 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:00 crc kubenswrapper[4769]: E0122 14:04:00.677453 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.677477 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" Jan 22 14:04:00 crc kubenswrapper[4769]: E0122 14:04:00.677500 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-log" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.677507 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-log" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.677684 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" 
containerName="nova-api-log" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.677704 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" containerName="nova-api-api" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.678654 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.681204 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.688185 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.787683 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.789896 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.789936 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.790078 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.896329 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.896763 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.896864 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.896918 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.897989 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.905968 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a87cdd0-cc09-4004-90bf-bbe9bd9b453d" path="/var/lib/kubelet/pods/0a87cdd0-cc09-4004-90bf-bbe9bd9b453d/volumes" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.909540 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.924109 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:00 crc kubenswrapper[4769]: I0122 14:04:00.931665 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " pod="openstack/nova-api-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.020068 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.133914 4769 util.go:48] "No ready sandbox for pod can be found. 
Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203499 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203583 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203634 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203725 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203816 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203844 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.203912 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") pod \"2da17df6-1c4c-453a-9943-4a44e8a14664\" (UID: \"2da17df6-1c4c-453a-9943-4a44e8a14664\") " Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.204971 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.205156 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.208123 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq" (OuterVolumeSpecName: "kube-api-access-rqvxq") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "kube-api-access-rqvxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.210721 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts" (OuterVolumeSpecName: "scripts") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.239872 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.305776 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310495 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310528 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310542 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310553 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqvxq\" (UniqueName: \"kubernetes.io/projected/2da17df6-1c4c-453a-9943-4a44e8a14664-kube-api-access-rqvxq\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310564 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.310573 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2da17df6-1c4c-453a-9943-4a44e8a14664-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.312865 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data" (OuterVolumeSpecName: "config-data") pod "2da17df6-1c4c-453a-9943-4a44e8a14664" (UID: "2da17df6-1c4c-453a-9943-4a44e8a14664"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.412443 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2da17df6-1c4c-453a-9943-4a44e8a14664-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.489208 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.572160 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerStarted","Data":"35d5b0508fa43c69ed0a25708ff2e8f1c73a876bc675cab299797220908d7f38"} Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.578912 4769 generic.go:334] "Generic (PLEG): container finished" podID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerID="b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5" exitCode=0 Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.578961 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5"} Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.578994 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2da17df6-1c4c-453a-9943-4a44e8a14664","Type":"ContainerDied","Data":"63dc06d195b1c97ecfdd599025f891b09dac847761b101705571822c9d3ef1a0"} Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.579016 4769 scope.go:117] "RemoveContainer" containerID="0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.579157 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.613811 4769 scope.go:117] "RemoveContainer" containerID="18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.620937 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.633961 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.643678 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.644289 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-central-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644316 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-central-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.644331 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-notification-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644338 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-notification-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.644368 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="sg-core" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644377 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="sg-core" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.644388 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="proxy-httpd" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644395 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="proxy-httpd" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644556 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-notification-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644572 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="ceilometer-central-agent" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644589 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="proxy-httpd" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.644603 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" containerName="sg-core" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.646290 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.650357 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.650462 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.650563 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.686802 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720590 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720696 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720738 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720956 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.720999 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.721022 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.721072 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.721154 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq2rt\" (UniqueName: 
\"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.799624 4769 scope.go:117] "RemoveContainer" containerID="b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822095 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822142 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822166 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822200 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822247 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq2rt\" (UniqueName: \"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822303 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.822378 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.823363 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 
14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.823687 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.828438 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.829083 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.830032 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.838977 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.839691 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.842230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq2rt\" (UniqueName: \"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") pod \"ceilometer-0\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " pod="openstack/ceilometer-0" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.857985 4769 scope.go:117] "RemoveContainer" containerID="15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.892317 4769 scope.go:117] "RemoveContainer" containerID="0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.892742 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533\": container with ID starting with 0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533 not found: ID does not exist" containerID="0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.892776 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533"} err="failed to get container status 
\"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533\": rpc error: code = NotFound desc = could not find container \"0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533\": container with ID starting with 0bb74bf9b515919f39e14655679413cd135c984d3d72697791b38e7390ffc533 not found: ID does not exist" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.892830 4769 scope.go:117] "RemoveContainer" containerID="18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.893117 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947\": container with ID starting with 18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947 not found: ID does not exist" containerID="18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893142 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947"} err="failed to get container status \"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947\": rpc error: code = NotFound desc = could not find container \"18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947\": container with ID starting with 18e6c2922fc56fe03b8bd1a70aa73fd29a75c4ee02f29e129940eb6d615fd947 not found: ID does not exist" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893156 4769 scope.go:117] "RemoveContainer" containerID="b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.893330 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5\": container with ID starting with b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5 not found: ID does not exist" containerID="b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893347 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5"} err="failed to get container status \"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5\": rpc error: code = NotFound desc = could not find container \"b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5\": container with ID starting with b5629e480d5f9bca2b9aefb9619e124dd88f058584573bab31d2157d72077ec5 not found: ID does not exist" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893361 4769 scope.go:117] "RemoveContainer" containerID="15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46" Jan 22 14:04:01 crc kubenswrapper[4769]: E0122 14:04:01.893550 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46\": container with ID starting with 15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46 not found: ID does not exist" containerID="15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46" Jan 22 14:04:01 crc kubenswrapper[4769]: I0122 14:04:01.893567 4769 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46"} err="failed to get container status \"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46\": rpc error: code = NotFound desc = could not find container \"15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46\": container with ID starting with 15eac8b08c32812a039810bb011b46bf61ee7b4ab7cdc8b93d737f5a20210c46 not found: ID does not exist" Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.121924 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.553096 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:02 crc kubenswrapper[4769]: W0122 14:04:02.557217 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf902ed28_5882_448c_b405_0e73826dc0c4.slice/crio-f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970 WatchSource:0}: Error finding container f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970: Status 404 returned error can't find the container with id f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970 Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.589117 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerStarted","Data":"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497"} Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.589170 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerStarted","Data":"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f"} Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.591854 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970"} Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.616332 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.61631101 podStartE2EDuration="2.61631101s" podCreationTimestamp="2026-01-22 14:04:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:02.612251989 +0000 UTC m=+1222.023361928" watchObservedRunningTime="2026-01-22 14:04:02.61631101 +0000 UTC m=+1222.027420939" Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.900273 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da17df6-1c4c-453a-9943-4a44e8a14664" path="/var/lib/kubelet/pods/2da17df6-1c4c-453a-9943-4a44e8a14664/volumes" Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.920846 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.974738 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:04:02 crc kubenswrapper[4769]: I0122 14:04:02.974843 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:04:03 
Jan 22 14:04:03 crc kubenswrapper[4769]: I0122 14:04:03.603817 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff"}
Jan 22 14:04:04 crc kubenswrapper[4769]: I0122 14:04:04.614124 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1"}
Jan 22 14:04:04 crc kubenswrapper[4769]: I0122 14:04:04.905646 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 22 14:04:05 crc kubenswrapper[4769]: I0122 14:04:05.624892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67"}
Jan 22 14:04:06 crc kubenswrapper[4769]: I0122 14:04:06.642490 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerStarted","Data":"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d"}
Jan 22 14:04:06 crc kubenswrapper[4769]: I0122 14:04:06.643034 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 22 14:04:06 crc kubenswrapper[4769]: I0122 14:04:06.665675 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.489903928 podStartE2EDuration="5.665652051s" podCreationTimestamp="2026-01-22 14:04:01 +0000 UTC" firstStartedPulling="2026-01-22 14:04:02.559317912 +0000 UTC m=+1221.970427841" lastFinishedPulling="2026-01-22 14:04:05.735066035 +0000 UTC m=+1225.146175964" observedRunningTime="2026-01-22 14:04:06.663187344 +0000 UTC m=+1226.074297273" watchObservedRunningTime="2026-01-22 14:04:06.665652051 +0000 UTC m=+1226.076761980"
Jan 22 14:04:07 crc kubenswrapper[4769]: I0122 14:04:07.920753 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 22 14:04:07 crc kubenswrapper[4769]: I0122 14:04:07.950703 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 22 14:04:07 crc kubenswrapper[4769]: I0122 14:04:07.975403 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 22 14:04:07 crc kubenswrapper[4769]: I0122 14:04:07.976930 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 22 14:04:08 crc kubenswrapper[4769]: I0122 14:04:08.002511 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 22 14:04:08 crc kubenswrapper[4769]: I0122 14:04:08.707040 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 22 14:04:08 crc kubenswrapper[4769]: I0122 14:04:08.984980 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 14:04:08 crc kubenswrapper[4769]: I0122 14:04:08.984980 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 22 14:04:11 crc kubenswrapper[4769]: I0122 14:04:11.022001 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 14:04:11 crc kubenswrapper[4769]: I0122 14:04:11.022571 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 22 14:04:12 crc kubenswrapper[4769]: I0122 14:04:12.104011 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 14:04:12 crc kubenswrapper[4769]: I0122 14:04:12.104054 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.738414 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753513 4769 generic.go:334] "Generic (PLEG): container finished" podID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerID="8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516" exitCode=137
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753727 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1f2c596-25ff-4c08-9b23-b90aca949e45","Type":"ContainerDied","Data":"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"}
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753877 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f1f2c596-25ff-4c08-9b23-b90aca949e45","Type":"ContainerDied","Data":"8522cdc8b7e7fadf9198c4e41afe42ad7a56383c9af88b3279cb3345f6237754"}
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753929 4769 scope.go:117] "RemoveContainer" containerID="8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.753998 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.779198 4769 scope.go:117] "RemoveContainer" containerID="8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"
Jan 22 14:04:17 crc kubenswrapper[4769]: E0122 14:04:17.779773 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516\": container with ID starting with 8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516 not found: ID does not exist" containerID="8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.780222 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516"} err="failed to get container status \"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516\": rpc error: code = NotFound desc = could not find container \"8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516\": container with ID starting with 8f9e70a0f1c97e8735286a0ca726202c1244aa104f63b81296d54b23717fa516 not found: ID does not exist"
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.836012 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") pod \"f1f2c596-25ff-4c08-9b23-b90aca949e45\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") "
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.836138 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") pod \"f1f2c596-25ff-4c08-9b23-b90aca949e45\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") "
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.836195 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") pod \"f1f2c596-25ff-4c08-9b23-b90aca949e45\" (UID: \"f1f2c596-25ff-4c08-9b23-b90aca949e45\") "
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.842982 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt" (OuterVolumeSpecName: "kube-api-access-lbnbt") pod "f1f2c596-25ff-4c08-9b23-b90aca949e45" (UID: "f1f2c596-25ff-4c08-9b23-b90aca949e45"). InnerVolumeSpecName "kube-api-access-lbnbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.865146 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data" (OuterVolumeSpecName: "config-data") pod "f1f2c596-25ff-4c08-9b23-b90aca949e45" (UID: "f1f2c596-25ff-4c08-9b23-b90aca949e45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.873723 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1f2c596-25ff-4c08-9b23-b90aca949e45" (UID: "f1f2c596-25ff-4c08-9b23-b90aca949e45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.938783 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbnbt\" (UniqueName: \"kubernetes.io/projected/f1f2c596-25ff-4c08-9b23-b90aca949e45-kube-api-access-lbnbt\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.938834 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.938848 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1f2c596-25ff-4c08-9b23-b90aca949e45-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.980629 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.980703 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.987292 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 14:04:17 crc kubenswrapper[4769]: I0122 14:04:17.994436 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.092401 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.103191 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.158595 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:18 crc kubenswrapper[4769]: E0122 14:04:18.159142 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.159166 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.159449 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" containerName="nova-cell1-novncproxy-novncproxy" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.160392 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.162482 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.163919 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.164784 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.184932 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.245586 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.245764 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.246093 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.246153 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc56f\" (UniqueName: \"kubernetes.io/projected/5697f97b-b5e1-4e54-aebb-540e12b7953c-kube-api-access-rc56f\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.246214 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.347914 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.348069 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.348110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc56f\" (UniqueName: \"kubernetes.io/projected/5697f97b-b5e1-4e54-aebb-540e12b7953c-kube-api-access-rc56f\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.348167 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.348324 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.352118 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.354504 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.354545 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.354773 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5697f97b-b5e1-4e54-aebb-540e12b7953c-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.380564 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc56f\" (UniqueName: \"kubernetes.io/projected/5697f97b-b5e1-4e54-aebb-540e12b7953c-kube-api-access-rc56f\") pod \"nova-cell1-novncproxy-0\" (UID: \"5697f97b-b5e1-4e54-aebb-540e12b7953c\") " pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.478398 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.897132 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f2c596-25ff-4c08-9b23-b90aca949e45" path="/var/lib/kubelet/pods/f1f2c596-25ff-4c08-9b23-b90aca949e45/volumes" Jan 22 14:04:18 crc kubenswrapper[4769]: I0122 14:04:18.911052 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 22 14:04:19 crc kubenswrapper[4769]: I0122 14:04:19.777142 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5697f97b-b5e1-4e54-aebb-540e12b7953c","Type":"ContainerStarted","Data":"481d01771636f93b7db8286bb4ce6448c8a9383a97aa209cbcd19cf2d2c579f7"} Jan 22 14:04:19 crc kubenswrapper[4769]: I0122 14:04:19.777509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5697f97b-b5e1-4e54-aebb-540e12b7953c","Type":"ContainerStarted","Data":"18ee91c6bb3320634d3c484df9199d7d5d8c792104c4053b2eb75a866e163bfd"} Jan 22 14:04:19 crc kubenswrapper[4769]: I0122 14:04:19.800006 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.7999820789999998 podStartE2EDuration="1.799982079s" podCreationTimestamp="2026-01-22 14:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:19.795058005 +0000 UTC m=+1239.206167934" watchObservedRunningTime="2026-01-22 14:04:19.799982079 +0000 UTC m=+1239.211092008" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.024539 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.025119 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.027696 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.028098 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.796511 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.800313 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.990459 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-n9fh2"] Jan 22 14:04:21 crc kubenswrapper[4769]: I0122 14:04:21.992376 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.016378 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-n9fh2"] Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.041656 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.041773 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.041891 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.041929 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-config\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.042004 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.042048 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzspf\" (UniqueName: \"kubernetes.io/projected/6862cbe8-3411-44fc-a4a8-429c3551f695-kube-api-access-lzspf\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.144468 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.144607 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.144641 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.144924 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-config\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.146855 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.146922 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzspf\" (UniqueName: \"kubernetes.io/projected/6862cbe8-3411-44fc-a4a8-429c3551f695-kube-api-access-lzspf\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.150663 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-config\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.150860 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.151007 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.151977 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.154608 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6862cbe8-3411-44fc-a4a8-429c3551f695-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.191680 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzspf\" (UniqueName: 
\"kubernetes.io/projected/6862cbe8-3411-44fc-a4a8-429c3551f695-kube-api-access-lzspf\") pod \"dnsmasq-dns-59cf4bdb65-n9fh2\" (UID: \"6862cbe8-3411-44fc-a4a8-429c3551f695\") " pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.335469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:22 crc kubenswrapper[4769]: I0122 14:04:22.859008 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-n9fh2"] Jan 22 14:04:23 crc kubenswrapper[4769]: I0122 14:04:23.479367 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:23 crc kubenswrapper[4769]: I0122 14:04:23.825172 4769 generic.go:334] "Generic (PLEG): container finished" podID="6862cbe8-3411-44fc-a4a8-429c3551f695" containerID="d15cdae013c4e526c860afdacd192eefc8491c63ed7c25b7d223d7e76a121a74" exitCode=0 Jan 22 14:04:23 crc kubenswrapper[4769]: I0122 14:04:23.827387 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" event={"ID":"6862cbe8-3411-44fc-a4a8-429c3551f695","Type":"ContainerDied","Data":"d15cdae013c4e526c860afdacd192eefc8491c63ed7c25b7d223d7e76a121a74"} Jan 22 14:04:23 crc kubenswrapper[4769]: I0122 14:04:23.827431 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" event={"ID":"6862cbe8-3411-44fc-a4a8-429c3551f695","Type":"ContainerStarted","Data":"c02cf0d798ec3b1583d130341ef91b5b9df6cb6c8b83ff441852191458dde04b"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.387611 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.388267 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-central-agent" containerID="cri-o://33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.388344 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="sg-core" containerID="cri-o://a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.388371 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-notification-agent" containerID="cri-o://95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.388490 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" containerID="cri-o://5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.409429 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.488862 4769 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836508 4769 generic.go:334] "Generic (PLEG): container finished" podID="f902ed28-5882-448c-b405-0e73826dc0c4" containerID="5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" exitCode=0 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836535 4769 generic.go:334] "Generic (PLEG): container finished" podID="f902ed28-5882-448c-b405-0e73826dc0c4" containerID="a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" exitCode=2 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836544 4769 generic.go:334] "Generic (PLEG): container finished" podID="f902ed28-5882-448c-b405-0e73826dc0c4" containerID="33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" exitCode=0 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836582 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836624 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.836636 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.839668 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" event={"ID":"6862cbe8-3411-44fc-a4a8-429c3551f695","Type":"ContainerStarted","Data":"e597c1032f3a38d027e48757274e85a8dd6060da78afc828fd2ba0d1b9fe0639"} Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.839841 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" containerID="cri-o://037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.839884 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" containerID="cri-o://fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" gracePeriod=30 Jan 22 14:04:24 crc kubenswrapper[4769]: I0122 14:04:24.866722 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" podStartSLOduration=3.866701954 podStartE2EDuration="3.866701954s" podCreationTimestamp="2026-01-22 14:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:24.857831073 +0000 UTC m=+1244.268941012" watchObservedRunningTime="2026-01-22 14:04:24.866701954 +0000 UTC m=+1244.277811883" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.720363 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.728672 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.728730 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.728839 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.728874 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729378 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729197 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729424 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729464 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729666 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq2rt\" (UniqueName: \"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.729715 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") pod \"f902ed28-5882-448c-b405-0e73826dc0c4\" (UID: \"f902ed28-5882-448c-b405-0e73826dc0c4\") " Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.730091 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.730118 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f902ed28-5882-448c-b405-0e73826dc0c4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.749944 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt" (OuterVolumeSpecName: "kube-api-access-tq2rt") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "kube-api-access-tq2rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.754102 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts" (OuterVolumeSpecName: "scripts") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.786243 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.805907 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.831211 4769 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.831267 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tq2rt\" (UniqueName: \"kubernetes.io/projected/f902ed28-5882-448c-b405-0e73826dc0c4-kube-api-access-tq2rt\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.831279 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.831291 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.864927 4769 generic.go:334] "Generic (PLEG): container finished" podID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerID="037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" exitCode=143 Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.865341 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerDied","Data":"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f"} Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868548 4769 generic.go:334] "Generic (PLEG): container finished" podID="f902ed28-5882-448c-b405-0e73826dc0c4" containerID="95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" exitCode=0 Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868730 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1"} Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868778 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868806 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f902ed28-5882-448c-b405-0e73826dc0c4","Type":"ContainerDied","Data":"f3fd9999ec3d1b650894d27d0f996af4e6075d28c706e9c80c518e6174d1b970"} Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.868837 4769 scope.go:117] "RemoveContainer" containerID="5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.869191 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.892126 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.906136 4769 scope.go:117] "RemoveContainer" containerID="a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.907412 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data" (OuterVolumeSpecName: "config-data") pod "f902ed28-5882-448c-b405-0e73826dc0c4" (UID: "f902ed28-5882-448c-b405-0e73826dc0c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.925617 4769 scope.go:117] "RemoveContainer" containerID="95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.932839 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.932872 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f902ed28-5882-448c-b405-0e73826dc0c4-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.945204 4769 scope.go:117] "RemoveContainer" containerID="33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963155 4769 scope.go:117] "RemoveContainer" containerID="5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" Jan 22 14:04:25 crc kubenswrapper[4769]: E0122 14:04:25.963541 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d\": container with ID starting with 5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d not found: ID does not exist" containerID="5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963574 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d"} err="failed to get container status \"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d\": rpc error: code = NotFound desc = could not find container \"5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d\": container with ID starting with 5654e44d205bf51e2ac41880b1659a570be4aa639cd373d4340517b54e17813d not found: ID does not exist" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963597 4769 scope.go:117] "RemoveContainer" containerID="a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" Jan 22 14:04:25 crc kubenswrapper[4769]: E0122 14:04:25.963901 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67\": container with ID starting with a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67 not found: ID does not exist" containerID="a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963942 4769 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67"} err="failed to get container status \"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67\": rpc error: code = NotFound desc = could not find container \"a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67\": container with ID starting with a43b713b07a3508e1dc013eed4e611717b51268adedfff171c8a279077a46f67 not found: ID does not exist" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.963973 4769 scope.go:117] "RemoveContainer" containerID="95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" Jan 22 14:04:25 crc kubenswrapper[4769]: E0122 14:04:25.964359 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1\": container with ID starting with 95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1 not found: ID does not exist" containerID="95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.964384 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1"} err="failed to get container status \"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1\": rpc error: code = NotFound desc = could not find container \"95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1\": container with ID starting with 95f43db78fe49c037a1c4098c6db959b4ddbe876db94b04f3ced72ff0dcb8fc1 not found: ID does not exist" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.964401 4769 scope.go:117] "RemoveContainer" containerID="33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" Jan 22 14:04:25 crc kubenswrapper[4769]: E0122 14:04:25.964757 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff\": container with ID starting with 33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff not found: ID does not exist" containerID="33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff" Jan 22 14:04:25 crc kubenswrapper[4769]: I0122 14:04:25.964777 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff"} err="failed to get container status \"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff\": rpc error: code = NotFound desc = could not find container \"33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff\": container with ID starting with 33954ebaa2ae08febdaf6d8e5ed6dbd06836ec70ff2e59b7176a4bf1239212ff not found: ID does not exist" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.199491 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.208580 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.225239 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.225812 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="sg-core" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.225885 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="sg-core" Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.225941 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-central-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.225992 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-central-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.226059 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226110 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.226207 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-notification-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226260 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-notification-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226481 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="sg-core" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226553 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="proxy-httpd" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226617 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-notification-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.226676 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" containerName="ceilometer-central-agent" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.228371 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.231123 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.231627 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.233382 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236673 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236741 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236768 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236781 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.236913 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.237016 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 
22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.249141 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.333491 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:26 crc kubenswrapper[4769]: E0122 14:04:26.334913 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-zdz55 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="9ac75153-4f8f-47c2-82c5-3239847b908a" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340047 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340100 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340168 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.340738 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.341032 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.341094 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.341195 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 
14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.341475 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.345017 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.350617 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.351045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.352161 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.354491 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.354672 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.364263 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") pod \"ceilometer-0\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.883877 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.897034 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f902ed28-5882-448c-b405-0e73826dc0c4" path="/var/lib/kubelet/pods/f902ed28-5882-448c-b405-0e73826dc0c4/volumes" Jan 22 14:04:26 crc kubenswrapper[4769]: I0122 14:04:26.897607 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055240 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055320 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055382 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055555 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055582 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055604 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055677 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.055701 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") pod \"9ac75153-4f8f-47c2-82c5-3239847b908a\" (UID: \"9ac75153-4f8f-47c2-82c5-3239847b908a\") " Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.057836 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.061319 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.070024 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55" (OuterVolumeSpecName: "kube-api-access-zdz55") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "kube-api-access-zdz55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.071871 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.074984 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.075104 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data" (OuterVolumeSpecName: "config-data") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.075911 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts" (OuterVolumeSpecName: "scripts") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.089697 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9ac75153-4f8f-47c2-82c5-3239847b908a" (UID: "9ac75153-4f8f-47c2-82c5-3239847b908a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159340 4769 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159379 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159399 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159410 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159421 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159432 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac75153-4f8f-47c2-82c5-3239847b908a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159441 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ac75153-4f8f-47c2-82c5-3239847b908a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.159451 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdz55\" (UniqueName: \"kubernetes.io/projected/9ac75153-4f8f-47c2-82c5-3239847b908a-kube-api-access-zdz55\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.892128 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.957172 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:27 crc kubenswrapper[4769]: I0122 14:04:27.983162 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.003312 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.009179 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.011797 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.013366 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.013622 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.014109 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.183128 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184170 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184232 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-config-data\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184259 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184287 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184339 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184373 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4pmx\" (UniqueName: \"kubernetes.io/projected/d9fe083b-8f17-4c51-87ff-a8a7f447190d-kube-api-access-t4pmx\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.184407 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-scripts\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.302951 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303021 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303147 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4pmx\" (UniqueName: \"kubernetes.io/projected/d9fe083b-8f17-4c51-87ff-a8a7f447190d-kube-api-access-t4pmx\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303193 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-scripts\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303325 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303384 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.303447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-config-data\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.307087 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-log-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.307156 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d9fe083b-8f17-4c51-87ff-a8a7f447190d-run-httpd\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.311078 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-config-data\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.311366 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.312400 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.313038 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.322279 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9fe083b-8f17-4c51-87ff-a8a7f447190d-scripts\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.322866 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4pmx\" (UniqueName: \"kubernetes.io/projected/d9fe083b-8f17-4c51-87ff-a8a7f447190d-kube-api-access-t4pmx\") pod \"ceilometer-0\" (UID: \"d9fe083b-8f17-4c51-87ff-a8a7f447190d\") " pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.431510 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.479309 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.501079 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.513283 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.712000 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") pod \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.712336 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") pod \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.712428 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") pod \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.712458 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") pod \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\" (UID: \"c364fe67-27fa-404c-aef8-7c9daeda4c5b\") " Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.713146 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs" (OuterVolumeSpecName: "logs") pod "c364fe67-27fa-404c-aef8-7c9daeda4c5b" (UID: "c364fe67-27fa-404c-aef8-7c9daeda4c5b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.719401 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw" (OuterVolumeSpecName: "kube-api-access-87qhw") pod "c364fe67-27fa-404c-aef8-7c9daeda4c5b" (UID: "c364fe67-27fa-404c-aef8-7c9daeda4c5b"). InnerVolumeSpecName "kube-api-access-87qhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.751097 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c364fe67-27fa-404c-aef8-7c9daeda4c5b" (UID: "c364fe67-27fa-404c-aef8-7c9daeda4c5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.753013 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data" (OuterVolumeSpecName: "config-data") pod "c364fe67-27fa-404c-aef8-7c9daeda4c5b" (UID: "c364fe67-27fa-404c-aef8-7c9daeda4c5b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.814521 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.814564 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87qhw\" (UniqueName: \"kubernetes.io/projected/c364fe67-27fa-404c-aef8-7c9daeda4c5b-kube-api-access-87qhw\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.814578 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c364fe67-27fa-404c-aef8-7c9daeda4c5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.814590 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c364fe67-27fa-404c-aef8-7c9daeda4c5b-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.900382 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac75153-4f8f-47c2-82c5-3239847b908a" path="/var/lib/kubelet/pods/9ac75153-4f8f-47c2-82c5-3239847b908a/volumes" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.907730 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: W0122 14:04:28.910769 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9fe083b_8f17_4c51_87ff_a8a7f447190d.slice/crio-c26307cb5eafb6426a885f78d8ae320c7745045c29dc2fd8de9b728b092410f5 WatchSource:0}: Error finding container c26307cb5eafb6426a885f78d8ae320c7745045c29dc2fd8de9b728b092410f5: Status 404 returned error can't find the container with id c26307cb5eafb6426a885f78d8ae320c7745045c29dc2fd8de9b728b092410f5 Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.915219 4769 generic.go:334] "Generic (PLEG): container finished" podID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerID="fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" exitCode=0 Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.916416 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.916924 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerDied","Data":"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497"} Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.916960 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c364fe67-27fa-404c-aef8-7c9daeda4c5b","Type":"ContainerDied","Data":"35d5b0508fa43c69ed0a25708ff2e8f1c73a876bc675cab299797220908d7f38"} Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.916982 4769 scope.go:117] "RemoveContainer" containerID="fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.944077 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.953524 4769 scope.go:117] "RemoveContainer" containerID="037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.957230 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.976273 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.983847 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:28 crc kubenswrapper[4769]: E0122 14:04:28.984282 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.984305 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" Jan 22 14:04:28 crc kubenswrapper[4769]: E0122 14:04:28.984320 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.984328 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.984541 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-api" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.984562 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" containerName="nova-api-log" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.985500 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.987155 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.987770 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.991558 4769 scope.go:117] "RemoveContainer" containerID="fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.992081 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 14:04:28 crc kubenswrapper[4769]: E0122 14:04:28.992239 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497\": container with ID starting with fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497 not found: ID does not exist" containerID="fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.992357 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497"} err="failed to get container status \"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497\": rpc error: code = NotFound desc = could not find container \"fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497\": container with ID starting with fd28cc9550ecd676226cd8246a263caa8c331889e275074ef01524152cabf497 not found: ID does not exist" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.992475 4769 scope.go:117] "RemoveContainer" containerID="037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" Jan 22 14:04:28 crc kubenswrapper[4769]: E0122 14:04:28.993518 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f\": container with ID starting with 037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f not found: ID does not exist" containerID="037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f" Jan 22 14:04:28 crc kubenswrapper[4769]: I0122 14:04:28.993659 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f"} err="failed to get container status \"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f\": rpc error: code = NotFound desc = could not find container \"037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f\": container with ID starting with 037f7eeaac4d3e4d9fba0b70e1ebf52b58b8701e1639d6de544044d9a9f39e7f not found: ID does not exist" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.031854 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.121918 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" 
Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122015 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122108 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122163 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122190 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.122517 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.174278 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-5j7zn"] Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.175564 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.177256 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.178307 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.182575 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j7zn"] Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224403 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224470 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224515 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224538 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224617 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.224656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.228681 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.229246 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.229316 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.229829 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.232818 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.244160 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") pod \"nova-api-0\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.325883 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.325946 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.325991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.326064 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.330385 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.428118 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.428350 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.428938 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.428984 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.432224 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.432747 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.434226 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.463765 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") pod \"nova-cell1-cell-mapping-5j7zn\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.496881 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.824269 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.930259 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"2c2b5612a9fd6512e6cf8e192ab9515d44f14b5c4425fd825610c65da8dc8927"} Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.930569 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"c26307cb5eafb6426a885f78d8ae320c7745045c29dc2fd8de9b728b092410f5"} Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.934301 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerStarted","Data":"b2ffc07def655a31961f7d5ac693137c0965a0d22c046824a655fd36ee880dad"} Jan 22 14:04:29 crc kubenswrapper[4769]: I0122 14:04:29.967707 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-5j7zn"] Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.922070 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c364fe67-27fa-404c-aef8-7c9daeda4c5b" path="/var/lib/kubelet/pods/c364fe67-27fa-404c-aef8-7c9daeda4c5b/volumes" Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.949554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"d550481eae244b0acb11940c894759b33a66e95371413ba92a66003adbc70c4b"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.951888 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j7zn" event={"ID":"4b01ed3a-6c71-4384-80a2-59814d125061","Type":"ContainerStarted","Data":"8cbd39a1426db3df58f12d00edd2c60b7040ef05de418ca23684e54739a301fe"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.951948 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j7zn" event={"ID":"4b01ed3a-6c71-4384-80a2-59814d125061","Type":"ContainerStarted","Data":"633a8acd221448532778ab148a9c13fa97affd050eec96d8e6cfe7a7d272922d"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.955875 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerStarted","Data":"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.955924 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerStarted","Data":"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0"} Jan 22 14:04:30 crc kubenswrapper[4769]: I0122 14:04:30.995510 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9954872850000003 podStartE2EDuration="2.995487285s" podCreationTimestamp="2026-01-22 14:04:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:30.990227063 +0000 UTC m=+1250.401336992" 
watchObservedRunningTime="2026-01-22 14:04:30.995487285 +0000 UTC m=+1250.406597214" Jan 22 14:04:31 crc kubenswrapper[4769]: I0122 14:04:31.012469 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-5j7zn" podStartSLOduration=2.012450206 podStartE2EDuration="2.012450206s" podCreationTimestamp="2026-01-22 14:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:31.009586349 +0000 UTC m=+1250.420696288" watchObservedRunningTime="2026-01-22 14:04:31.012450206 +0000 UTC m=+1250.423560135" Jan 22 14:04:31 crc kubenswrapper[4769]: I0122 14:04:31.978582 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"10a5117e729b092a3469b25a028bd64aa98c9e9204cca4f30a629651279581b9"} Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.336993 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-n9fh2" Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.421705 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.421952 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="dnsmasq-dns" containerID="cri-o://097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" gracePeriod=10 Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.950107 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993384 4769 generic.go:334] "Generic (PLEG): container finished" podID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerID="097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" exitCode=0 Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993464 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerDied","Data":"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb"} Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" event={"ID":"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c","Type":"ContainerDied","Data":"07ff2a18726b3f734621e81451a91539db3bacf8cce99d939c1f38660bd71e0c"} Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993528 4769 scope.go:117] "RemoveContainer" containerID="097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" Jan 22 14:04:32 crc kubenswrapper[4769]: I0122 14:04:32.993694 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-hb2xg" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:32.998460 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d9fe083b-8f17-4c51-87ff-a8a7f447190d","Type":"ContainerStarted","Data":"a90ef393236dbedf0a5581ef2530d218440f83d072d4ee775121bc524641d3eb"} Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:32.999487 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.017101 4769 scope.go:117] "RemoveContainer" containerID="5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.026203 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.026253 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.026294 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.032907 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.62366664 podStartE2EDuration="6.032891743s" podCreationTimestamp="2026-01-22 14:04:27 +0000 UTC" firstStartedPulling="2026-01-22 14:04:28.917412454 +0000 UTC m=+1248.328522383" lastFinishedPulling="2026-01-22 14:04:32.326637557 +0000 UTC m=+1251.737747486" observedRunningTime="2026-01-22 14:04:33.027147887 +0000 UTC m=+1252.438257816" watchObservedRunningTime="2026-01-22 14:04:33.032891743 +0000 UTC m=+1252.444001672" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.046860 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb" (OuterVolumeSpecName: "kube-api-access-8msgb") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "kube-api-access-8msgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.056669 4769 scope.go:117] "RemoveContainer" containerID="097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" Jan 22 14:04:33 crc kubenswrapper[4769]: E0122 14:04:33.057202 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb\": container with ID starting with 097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb not found: ID does not exist" containerID="097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.057238 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb"} err="failed to get container status \"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb\": rpc error: code = NotFound desc = could not find container \"097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb\": container with ID starting with 097268cd9b4b048c77b3bed18c15fcbd5ff809f46cfef2a702c3dc0cab1091bb not found: ID does not exist" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.057265 4769 scope.go:117] "RemoveContainer" containerID="5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4" Jan 22 14:04:33 crc kubenswrapper[4769]: E0122 14:04:33.057733 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4\": container with ID starting with 5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4 not found: ID does not exist" containerID="5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.057762 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4"} err="failed to get container status \"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4\": rpc error: code = NotFound desc = could not find container \"5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4\": container with ID starting with 5ae6d8389b8fd75024e021ee39c4d142ba4295adb4f7e76df5657555a85574c4 not found: ID does not exist" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.093330 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.093438 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.128715 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.128813 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.128918 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") pod \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\" (UID: \"52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c\") " Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.129453 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8msgb\" (UniqueName: \"kubernetes.io/projected/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-kube-api-access-8msgb\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.129472 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.129483 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.185157 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config" (OuterVolumeSpecName: "config") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.189075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.194079 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" (UID: "52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.230364 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.230406 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-config\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.230417 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.339677 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:04:33 crc kubenswrapper[4769]: I0122 14:04:33.351342 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-hb2xg"] Jan 22 14:04:34 crc kubenswrapper[4769]: I0122 14:04:34.895087 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" path="/var/lib/kubelet/pods/52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c/volumes" Jan 22 14:04:36 crc kubenswrapper[4769]: I0122 14:04:36.024030 4769 generic.go:334] "Generic (PLEG): container finished" podID="4b01ed3a-6c71-4384-80a2-59814d125061" containerID="8cbd39a1426db3df58f12d00edd2c60b7040ef05de418ca23684e54739a301fe" exitCode=0 Jan 22 14:04:36 crc kubenswrapper[4769]: I0122 14:04:36.024189 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j7zn" event={"ID":"4b01ed3a-6c71-4384-80a2-59814d125061","Type":"ContainerDied","Data":"8cbd39a1426db3df58f12d00edd2c60b7040ef05de418ca23684e54739a301fe"} Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.346861 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.505014 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") pod \"4b01ed3a-6c71-4384-80a2-59814d125061\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.505154 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") pod \"4b01ed3a-6c71-4384-80a2-59814d125061\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.505290 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") pod \"4b01ed3a-6c71-4384-80a2-59814d125061\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.505352 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") pod \"4b01ed3a-6c71-4384-80a2-59814d125061\" (UID: \"4b01ed3a-6c71-4384-80a2-59814d125061\") " Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.519338 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq" (OuterVolumeSpecName: "kube-api-access-c2dqq") pod "4b01ed3a-6c71-4384-80a2-59814d125061" (UID: "4b01ed3a-6c71-4384-80a2-59814d125061"). InnerVolumeSpecName "kube-api-access-c2dqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.525963 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts" (OuterVolumeSpecName: "scripts") pod "4b01ed3a-6c71-4384-80a2-59814d125061" (UID: "4b01ed3a-6c71-4384-80a2-59814d125061"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.536672 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b01ed3a-6c71-4384-80a2-59814d125061" (UID: "4b01ed3a-6c71-4384-80a2-59814d125061"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.554002 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data" (OuterVolumeSpecName: "config-data") pod "4b01ed3a-6c71-4384-80a2-59814d125061" (UID: "4b01ed3a-6c71-4384-80a2-59814d125061"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.608212 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2dqq\" (UniqueName: \"kubernetes.io/projected/4b01ed3a-6c71-4384-80a2-59814d125061-kube-api-access-c2dqq\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.608250 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.608263 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:37 crc kubenswrapper[4769]: I0122 14:04:37.608276 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b01ed3a-6c71-4384-80a2-59814d125061-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.045978 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-5j7zn" event={"ID":"4b01ed3a-6c71-4384-80a2-59814d125061","Type":"ContainerDied","Data":"633a8acd221448532778ab148a9c13fa97affd050eec96d8e6cfe7a7d272922d"} Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.046304 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633a8acd221448532778ab148a9c13fa97affd050eec96d8e6cfe7a7d272922d" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.046224 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-5j7zn" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.205545 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.205846 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-log" containerID="cri-o://ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.206336 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-api" containerID="cri-o://5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.224528 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.224740 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7875d554-e943-402f-b176-8644590e7926" containerName="nova-scheduler-scheduler" containerID="cri-o://e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.268537 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.268805 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" 
containerName="nova-metadata-log" containerID="cri-o://5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.268909 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata" containerID="cri-o://c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" gracePeriod=30 Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.742959 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834291 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834656 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834915 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834940 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.834994 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") pod \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\" (UID: \"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397\") " Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.836284 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs" (OuterVolumeSpecName: "logs") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.839968 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96" (OuterVolumeSpecName: "kube-api-access-gjv96") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "kube-api-access-gjv96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.862642 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data" (OuterVolumeSpecName: "config-data") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.875325 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.885837 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.888522 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" (UID: "2f6bfbd9-5d31-4b63-9133-a3eebf0a8397"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936811 4769 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936853 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936863 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjv96\" (UniqueName: \"kubernetes.io/projected/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-kube-api-access-gjv96\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936873 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936882 4769 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:38 crc kubenswrapper[4769]: I0122 14:04:38.936890 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.057278 4769 generic.go:334] "Generic (PLEG): container finished" podID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerID="5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" exitCode=143 Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.057334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerDied","Data":"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b"} Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059337 4769 generic.go:334] "Generic (PLEG): container finished" podID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" exitCode=0 Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059366 4769 generic.go:334] "Generic (PLEG): container finished" podID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" exitCode=143 Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059384 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerDied","Data":"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031"} Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059398 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059409 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerDied","Data":"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0"} Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059424 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2f6bfbd9-5d31-4b63-9133-a3eebf0a8397","Type":"ContainerDied","Data":"b2ffc07def655a31961f7d5ac693137c0965a0d22c046824a655fd36ee880dad"} Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.059441 4769 scope.go:117] "RemoveContainer" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.083566 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.083979 4769 scope.go:117] "RemoveContainer" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.092609 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.102084 4769 scope.go:117] "RemoveContainer" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106468 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106884 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="init" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106902 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="init" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106917 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="dnsmasq-dns" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106924 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="dnsmasq-dns" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106937 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b01ed3a-6c71-4384-80a2-59814d125061" containerName="nova-manage" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106943 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b01ed3a-6c71-4384-80a2-59814d125061" containerName="nova-manage" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106969 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-api" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106974 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-api" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.106983 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-log" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.106990 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-log" Jan 22 
14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107148 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-log" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107157 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" containerName="nova-api-api" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107175 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b01ed3a-6c71-4384-80a2-59814d125061" containerName="nova-manage" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107181 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="52af5f4d-8bb3-47aa-99a6-2951d0fd5c4c" containerName="dnsmasq-dns" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.107181 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": container with ID starting with 5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031 not found: ID does not exist" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107230 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031"} err="failed to get container status \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": rpc error: code = NotFound desc = could not find container \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": container with ID starting with 5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031 not found: ID does not exist" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107256 4769 scope.go:117] "RemoveContainer" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" Jan 22 14:04:39 crc kubenswrapper[4769]: E0122 14:04:39.107739 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": container with ID starting with ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0 not found: ID does not exist" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107774 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0"} err="failed to get container status \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": rpc error: code = NotFound desc = could not find container \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": container with ID starting with ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0 not found: ID does not exist" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.107815 4769 scope.go:117] "RemoveContainer" containerID="5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.108122 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.108122 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031"} err="failed to get container status \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": rpc error: code = NotFound desc = could not find container \"5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031\": container with ID starting with 5badefe5dfc5dc53862d6dc8450236c1363f0c62d22db6a1b5d8bf02e416b031 not found: ID does not exist" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.108383 4769 scope.go:117] "RemoveContainer" containerID="ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.108719 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0"} err="failed to get container status \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": rpc error: code = NotFound desc = could not find container \"ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0\": container with ID starting with ac772e03063571c02a3a23adbab8727363f22c749e8ca54b86ceb6aaea9b29c0 not found: ID does not exist" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.112886 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.112912 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.113157 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.116144 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241364 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241409 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241445 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103e0f8-85be-424c-a705-112fb70500b6-logs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241470 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7xf\" (UniqueName: \"kubernetes.io/projected/b103e0f8-85be-424c-a705-112fb70500b6-kube-api-access-gj7xf\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 
14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-config-data\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.241910 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.343547 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344373 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103e0f8-85be-424c-a705-112fb70500b6-logs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344654 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj7xf\" (UniqueName: \"kubernetes.io/projected/b103e0f8-85be-424c-a705-112fb70500b6-kube-api-access-gj7xf\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344978 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-config-data\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.344983 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b103e0f8-85be-424c-a705-112fb70500b6-logs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.345226 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.349328 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc 
kubenswrapper[4769]: I0122 14:04:39.349428 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-public-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.349766 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.350969 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b103e0f8-85be-424c-a705-112fb70500b6-config-data\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.362938 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj7xf\" (UniqueName: \"kubernetes.io/projected/b103e0f8-85be-424c-a705-112fb70500b6-kube-api-access-gj7xf\") pod \"nova-api-0\" (UID: \"b103e0f8-85be-424c-a705-112fb70500b6\") " pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.423215 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 22 14:04:39 crc kubenswrapper[4769]: I0122 14:04:39.896708 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 22 14:04:40 crc kubenswrapper[4769]: I0122 14:04:40.071428 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b103e0f8-85be-424c-a705-112fb70500b6","Type":"ContainerStarted","Data":"48002534c49abaab7671101ae0719c7c4c2022c7a6f39e05ab463a0a9e3f06b6"} Jan 22 14:04:40 crc kubenswrapper[4769]: I0122 14:04:40.898707 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f6bfbd9-5d31-4b63-9133-a3eebf0a8397" path="/var/lib/kubelet/pods/2f6bfbd9-5d31-4b63-9133-a3eebf0a8397/volumes" Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.084621 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b103e0f8-85be-424c-a705-112fb70500b6","Type":"ContainerStarted","Data":"bf6b3a13867858551c087c4bf5b47d3b9826f0aa7f5f9d104ae27cbd8c12b07d"} Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.084680 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b103e0f8-85be-424c-a705-112fb70500b6","Type":"ContainerStarted","Data":"f46d18a68195a78bfa28ce1e6222943f0ee6b2ba742339cb83532f82af95e816"} Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.108505 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.108488121 podStartE2EDuration="2.108488121s" podCreationTimestamp="2026-01-22 14:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:41.104442381 +0000 UTC m=+1260.515552330" watchObservedRunningTime="2026-01-22 14:04:41.108488121 +0000 UTC m=+1260.519598050" Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.856049 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992678 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992755 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992833 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992875 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.992940 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") pod \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\" (UID: \"5c7cab01-0731-4a76-a6d5-b6d0905b2386\") " Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.994906 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs" (OuterVolumeSpecName: "logs") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:04:41 crc kubenswrapper[4769]: I0122 14:04:41.999523 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb" (OuterVolumeSpecName: "kube-api-access-n7psb") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "kube-api-access-n7psb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.024118 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.026026 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data" (OuterVolumeSpecName: "config-data") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.054834 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5c7cab01-0731-4a76-a6d5-b6d0905b2386" (UID: "5c7cab01-0731-4a76-a6d5-b6d0905b2386"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095294 4769 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095354 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c7cab01-0731-4a76-a6d5-b6d0905b2386-logs\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095367 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095377 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c7cab01-0731-4a76-a6d5-b6d0905b2386-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.095388 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7psb\" (UniqueName: \"kubernetes.io/projected/5c7cab01-0731-4a76-a6d5-b6d0905b2386-kube-api-access-n7psb\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097351 4769 generic.go:334] "Generic (PLEG): container finished" podID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerID="c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" exitCode=0 Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097418 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097451 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerDied","Data":"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c"} Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097501 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5c7cab01-0731-4a76-a6d5-b6d0905b2386","Type":"ContainerDied","Data":"8ea58d153112320153b0ab6e47deea2ca60609e453fb3c50cf4a5566adce1855"} Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.097519 4769 scope.go:117] "RemoveContainer" containerID="c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.218905 4769 scope.go:117] "RemoveContainer" containerID="5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.223416 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.237129 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251178 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:42 crc kubenswrapper[4769]: E0122 14:04:42.251636 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251662 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata" Jan 22 14:04:42 crc kubenswrapper[4769]: E0122 14:04:42.251686 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-log" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251694 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-log" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251902 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-metadata" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.251922 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" containerName="nova-metadata-log" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.252981 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.257178 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.266419 4769 scope.go:117] "RemoveContainer" containerID="c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.270972 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 22 14:04:42 crc kubenswrapper[4769]: E0122 14:04:42.270727 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c\": container with ID starting with c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c not found: ID does not exist" containerID="c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.273108 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c"} err="failed to get container status \"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c\": rpc error: code = NotFound desc = could not find container \"c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c\": container with ID starting with c9ef3086d0eab5a6024f2f27d8147bdef3796ef183a5e360249a426cc534010c not found: ID does not exist" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.273144 4769 scope.go:117] "RemoveContainer" containerID="5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" Jan 22 14:04:42 crc kubenswrapper[4769]: E0122 14:04:42.274720 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b\": container with ID starting with 5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b not found: ID does not exist" containerID="5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.274770 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b"} err="failed to get container status \"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b\": rpc error: code = NotFound desc = could not find container \"5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b\": container with ID starting with 5f77e6a254e6237b524fe2cf9da977a96602a8070e3ffc2d54bbf6f07842e09b not found: ID does not exist" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.280554 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.401998 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-config-data\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.402046 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.402117 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.402177 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6fa05e3-584d-4c81-bef8-b5224b93fba3-logs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.402255 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6wv5\" (UniqueName: \"kubernetes.io/projected/a6fa05e3-584d-4c81-bef8-b5224b93fba3-kube-api-access-s6wv5\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504014 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-config-data\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504093 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504160 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6fa05e3-584d-4c81-bef8-b5224b93fba3-logs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504243 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6wv5\" (UniqueName: \"kubernetes.io/projected/a6fa05e3-584d-4c81-bef8-b5224b93fba3-kube-api-access-s6wv5\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.504834 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6fa05e3-584d-4c81-bef8-b5224b93fba3-logs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 
14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.508651 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-config-data\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.508903 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.511858 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6fa05e3-584d-4c81-bef8-b5224b93fba3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.522998 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6wv5\" (UniqueName: \"kubernetes.io/projected/a6fa05e3-584d-4c81-bef8-b5224b93fba3-kube-api-access-s6wv5\") pod \"nova-metadata-0\" (UID: \"a6fa05e3-584d-4c81-bef8-b5224b93fba3\") " pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.581200 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.587639 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.707156 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") pod \"7875d554-e943-402f-b176-8644590e7926\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.707285 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") pod \"7875d554-e943-402f-b176-8644590e7926\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.707355 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") pod \"7875d554-e943-402f-b176-8644590e7926\" (UID: \"7875d554-e943-402f-b176-8644590e7926\") " Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.711913 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f" (OuterVolumeSpecName: "kube-api-access-zps2f") pod "7875d554-e943-402f-b176-8644590e7926" (UID: "7875d554-e943-402f-b176-8644590e7926"). InnerVolumeSpecName "kube-api-access-zps2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.736566 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7875d554-e943-402f-b176-8644590e7926" (UID: "7875d554-e943-402f-b176-8644590e7926"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.744946 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data" (OuterVolumeSpecName: "config-data") pod "7875d554-e943-402f-b176-8644590e7926" (UID: "7875d554-e943-402f-b176-8644590e7926"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.809609 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zps2f\" (UniqueName: \"kubernetes.io/projected/7875d554-e943-402f-b176-8644590e7926-kube-api-access-zps2f\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.809660 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.809673 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7875d554-e943-402f-b176-8644590e7926-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 14:04:42 crc kubenswrapper[4769]: I0122 14:04:42.900215 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7cab01-0731-4a76-a6d5-b6d0905b2386" path="/var/lib/kubelet/pods/5c7cab01-0731-4a76-a6d5-b6d0905b2386/volumes" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.052683 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.110553 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6fa05e3-584d-4c81-bef8-b5224b93fba3","Type":"ContainerStarted","Data":"f09c359b5df8f768dc10964c4ad03b6a9f9bc2c52bacd9fde09bd9eddfd45708"} Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112341 4769 generic.go:334] "Generic (PLEG): container finished" podID="7875d554-e943-402f-b176-8644590e7926" containerID="e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" exitCode=0 Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112387 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7875d554-e943-402f-b176-8644590e7926","Type":"ContainerDied","Data":"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2"} Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112406 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7875d554-e943-402f-b176-8644590e7926","Type":"ContainerDied","Data":"572df80009e2badcb09d845c35585498e31a50e4449686f5a44d8ee1e3d26270"} Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112420 4769 scope.go:117] "RemoveContainer" containerID="e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.112545 4769 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.155097 4769 scope.go:117] "RemoveContainer" containerID="e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" Jan 22 14:04:43 crc kubenswrapper[4769]: E0122 14:04:43.156935 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2\": container with ID starting with e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2 not found: ID does not exist" containerID="e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.156991 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2"} err="failed to get container status \"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2\": rpc error: code = NotFound desc = could not find container \"e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2\": container with ID starting with e0754791b973b6c6e50cd28d6e666820f0fab5aa1539d3354d44e545af3bf6d2 not found: ID does not exist" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.162950 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.172239 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.181739 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: E0122 14:04:43.182199 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7875d554-e943-402f-b176-8644590e7926" containerName="nova-scheduler-scheduler" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.182222 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7875d554-e943-402f-b176-8644590e7926" containerName="nova-scheduler-scheduler" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.182476 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7875d554-e943-402f-b176-8644590e7926" containerName="nova-scheduler-scheduler" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.184714 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.187472 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.216875 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.317936 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-config-data\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.318006 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4k4p\" (UniqueName: \"kubernetes.io/projected/169a141c-dd3f-4efa-9b61-bb8df13bcd49-kube-api-access-m4k4p\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.318190 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.420460 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.420668 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-config-data\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.420751 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4k4p\" (UniqueName: \"kubernetes.io/projected/169a141c-dd3f-4efa-9b61-bb8df13bcd49-kube-api-access-m4k4p\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.424343 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-config-data\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.424360 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169a141c-dd3f-4efa-9b61-bb8df13bcd49-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.438967 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4k4p\" (UniqueName: 
\"kubernetes.io/projected/169a141c-dd3f-4efa-9b61-bb8df13bcd49-kube-api-access-m4k4p\") pod \"nova-scheduler-0\" (UID: \"169a141c-dd3f-4efa-9b61-bb8df13bcd49\") " pod="openstack/nova-scheduler-0" Jan 22 14:04:43 crc kubenswrapper[4769]: I0122 14:04:43.510036 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.048969 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 22 14:04:44 crc kubenswrapper[4769]: W0122 14:04:44.052198 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169a141c_dd3f_4efa_9b61_bb8df13bcd49.slice/crio-e093c2b0dee48c664fb8988d804b59401d8f09a56e5e18d60ac79ad8fdda33e0 WatchSource:0}: Error finding container e093c2b0dee48c664fb8988d804b59401d8f09a56e5e18d60ac79ad8fdda33e0: Status 404 returned error can't find the container with id e093c2b0dee48c664fb8988d804b59401d8f09a56e5e18d60ac79ad8fdda33e0 Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.128836 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"169a141c-dd3f-4efa-9b61-bb8df13bcd49","Type":"ContainerStarted","Data":"e093c2b0dee48c664fb8988d804b59401d8f09a56e5e18d60ac79ad8fdda33e0"} Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.133065 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6fa05e3-584d-4c81-bef8-b5224b93fba3","Type":"ContainerStarted","Data":"9f1b725c403865900aba20ae4b6afc50bd6e84093c3bcf80cf680d36842cb58c"} Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.133113 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a6fa05e3-584d-4c81-bef8-b5224b93fba3","Type":"ContainerStarted","Data":"469949ef4013c921b84065e6d0391347e0e95af7d3fecd4ae7d8f79ba75e3ad5"} Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.153452 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.153433483 podStartE2EDuration="2.153433483s" podCreationTimestamp="2026-01-22 14:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:44.153417592 +0000 UTC m=+1263.564527531" watchObservedRunningTime="2026-01-22 14:04:44.153433483 +0000 UTC m=+1263.564543412" Jan 22 14:04:44 crc kubenswrapper[4769]: I0122 14:04:44.892890 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7875d554-e943-402f-b176-8644590e7926" path="/var/lib/kubelet/pods/7875d554-e943-402f-b176-8644590e7926/volumes" Jan 22 14:04:45 crc kubenswrapper[4769]: I0122 14:04:45.143824 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"169a141c-dd3f-4efa-9b61-bb8df13bcd49","Type":"ContainerStarted","Data":"5304c7146ba479e17e1db2d0c708f85c69b17235905053819dbf50e6aec78505"} Jan 22 14:04:45 crc kubenswrapper[4769]: I0122 14:04:45.172208 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.172185933 podStartE2EDuration="2.172185933s" podCreationTimestamp="2026-01-22 14:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:04:45.1669361 +0000 UTC m=+1264.578046029" 
watchObservedRunningTime="2026-01-22 14:04:45.172185933 +0000 UTC m=+1264.583295862" Jan 22 14:04:47 crc kubenswrapper[4769]: I0122 14:04:47.581963 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:04:47 crc kubenswrapper[4769]: I0122 14:04:47.582303 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 22 14:04:48 crc kubenswrapper[4769]: I0122 14:04:48.510157 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 22 14:04:49 crc kubenswrapper[4769]: I0122 14:04:49.424277 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:04:49 crc kubenswrapper[4769]: I0122 14:04:49.425188 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 22 14:04:50 crc kubenswrapper[4769]: I0122 14:04:50.436121 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b103e0f8-85be-424c-a705-112fb70500b6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:50 crc kubenswrapper[4769]: I0122 14:04:50.436141 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b103e0f8-85be-424c-a705-112fb70500b6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:52 crc kubenswrapper[4769]: I0122 14:04:52.581589 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 14:04:52 crc kubenswrapper[4769]: I0122 14:04:52.582055 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 22 14:04:53 crc kubenswrapper[4769]: I0122 14:04:53.511202 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 22 14:04:53 crc kubenswrapper[4769]: I0122 14:04:53.542031 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 22 14:04:53 crc kubenswrapper[4769]: I0122 14:04:53.596961 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a6fa05e3-584d-4c81-bef8-b5224b93fba3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:53 crc kubenswrapper[4769]: I0122 14:04:53.596971 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a6fa05e3-584d-4c81-bef8-b5224b93fba3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 22 14:04:54 crc kubenswrapper[4769]: I0122 14:04:54.256267 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 22 14:04:58 crc kubenswrapper[4769]: I0122 14:04:58.438924 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 22 14:04:59 crc kubenswrapper[4769]: I0122 14:04:59.432414 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 22 14:04:59 crc kubenswrapper[4769]: I0122 14:04:59.432854 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 14:04:59 crc kubenswrapper[4769]: I0122 14:04:59.433732 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 22 14:04:59 crc kubenswrapper[4769]: I0122 14:04:59.438764 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 14:05:00 crc kubenswrapper[4769]: I0122 14:05:00.287294 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 22 14:05:00 crc kubenswrapper[4769]: I0122 14:05:00.294140 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 22 14:05:02 crc kubenswrapper[4769]: I0122 14:05:02.588067 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 14:05:02 crc kubenswrapper[4769]: I0122 14:05:02.589082 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 22 14:05:02 crc kubenswrapper[4769]: I0122 14:05:02.593812 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 14:05:02 crc kubenswrapper[4769]: I0122 14:05:02.596456 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 22 14:05:10 crc kubenswrapper[4769]: I0122 14:05:10.976867 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:11 crc kubenswrapper[4769]: I0122 14:05:11.872761 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:14 crc kubenswrapper[4769]: I0122 14:05:14.935876 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" containerID="cri-o://49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0" gracePeriod=604797 Jan 22 14:05:15 crc kubenswrapper[4769]: I0122 14:05:15.891717 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" containerID="cri-o://401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce" gracePeriod=604796 Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.466623 4769 generic.go:334] "Generic (PLEG): container finished" podID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerID="49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0" exitCode=0 Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.466691 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerDied","Data":"49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0"} Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.554502 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737637 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737753 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737807 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737836 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737864 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csgrc\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737887 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737948 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.737992 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.738039 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.738097 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: 
\"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.738216 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") pod \"12de511c-514e-496c-9fbf-6d1e10db81fc\" (UID: \"12de511c-514e-496c-9fbf-6d1e10db81fc\") " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.739052 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.739170 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.739272 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.752708 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info" (OuterVolumeSpecName: "pod-info") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.753666 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc" (OuterVolumeSpecName: "kube-api-access-csgrc") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "kube-api-access-csgrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.754784 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.756123 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.765364 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.780583 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data" (OuterVolumeSpecName: "config-data") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.797468 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf" (OuterVolumeSpecName: "server-conf") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840872 4769 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840910 4769 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/12de511c-514e-496c-9fbf-6d1e10db81fc-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840923 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840932 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840940 4769 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840948 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/12de511c-514e-496c-9fbf-6d1e10db81fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840956 4769 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/12de511c-514e-496c-9fbf-6d1e10db81fc-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.840983 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 
14:05:21.840994 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csgrc\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-kube-api-access-csgrc\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.841003 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.852433 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "12de511c-514e-496c-9fbf-6d1e10db81fc" (UID: "12de511c-514e-496c-9fbf-6d1e10db81fc"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.871687 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.943984 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:21 crc kubenswrapper[4769]: I0122 14:05:21.944481 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/12de511c-514e-496c-9fbf-6d1e10db81fc-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.065700 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.476769 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.476955 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"12de511c-514e-496c-9fbf-6d1e10db81fc","Type":"ContainerDied","Data":"6d72a769611a46bdb1768f4e9380f28bb2a07dc2061ec5bd95716855943febe1"} Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.477567 4769 scope.go:117] "RemoveContainer" containerID="49f4ea3ddc87a4f5bedaa873ef01966d747d665e05df782c166bb9cc4f6f7bd0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.481145 4769 generic.go:334] "Generic (PLEG): container finished" podID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerID="401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce" exitCode=0 Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.481197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerDied","Data":"401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce"} Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.481235 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7b5386c6-ecca-4882-b692-80c4f5a194e7","Type":"ContainerDied","Data":"ccc004cd79462493e89b2cd51c3ab3ddf01650baa9a183653d7b3f8461132890"} Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.481246 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccc004cd79462493e89b2cd51c3ab3ddf01650baa9a183653d7b3f8461132890" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.498901 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.522742 4769 scope.go:117] "RemoveContainer" containerID="02b31e2a239b0168026857e943798de5de7f95b04782c217474e99a5a431076d" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.528753 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.560088 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608364 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:22 crc kubenswrapper[4769]: E0122 14:05:22.608859 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="setup-container" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608880 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="setup-container" Jan 22 14:05:22 crc kubenswrapper[4769]: E0122 14:05:22.608900 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="setup-container" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608907 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="setup-container" Jan 22 14:05:22 crc kubenswrapper[4769]: E0122 14:05:22.608929 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608935 4769 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: E0122 14:05:22.608962 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.608968 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.609132 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.609148 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" containerName="rabbitmq" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.610326 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.611991 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zm2vm" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.612935 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.613070 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.613449 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.615022 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.615206 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.622576 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.660963 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661132 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661168 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") pod 
\"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661289 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661343 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661399 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661466 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661498 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661550 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.661601 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7b5386c6-ecca-4882-b692-80c4f5a194e7\" (UID: \"7b5386c6-ecca-4882-b692-80c4f5a194e7\") " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.665633 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.665695 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.669254 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.670258 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info" (OuterVolumeSpecName: "pod-info") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.681042 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.681355 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.681969 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.685678 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.693155 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s" (OuterVolumeSpecName: "kube-api-access-kqp6s") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "kube-api-access-kqp6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.699556 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data" (OuterVolumeSpecName: "config-data") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.756150 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf" (OuterVolumeSpecName: "server-conf") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/962e2340-5ed3-4560-b61b-4675432bac01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763767 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763828 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763852 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763876 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz8xp\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-kube-api-access-lz8xp\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763953 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.763988 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764049 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc 
kubenswrapper[4769]: I0122 14:05:22.764080 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-config-data\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/962e2340-5ed3-4560-b61b-4675432bac01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764181 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764200 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764211 4769 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b5386c6-ecca-4882-b692-80c4f5a194e7-pod-info\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764233 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764243 4769 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b5386c6-ecca-4882-b692-80c4f5a194e7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764254 4769 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-server-conf\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764265 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764276 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqp6s\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-kube-api-access-kqp6s\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764286 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.764296 4769 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b5386c6-ecca-4882-b692-80c4f5a194e7-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.794506 4769 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.795639 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7b5386c6-ecca-4882-b692-80c4f5a194e7" (UID: "7b5386c6-ecca-4882-b692-80c4f5a194e7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865590 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865640 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865664 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865687 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz8xp\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-kube-api-access-lz8xp\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.865704 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.866931 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.867104 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.868205 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.868505 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869211 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869405 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869607 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-config-data\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869671 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/962e2340-5ed3-4560-b61b-4675432bac01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869743 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/962e2340-5ed3-4560-b61b-4675432bac01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869845 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b5386c6-ecca-4882-b692-80c4f5a194e7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869857 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.869950 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.870580 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/962e2340-5ed3-4560-b61b-4675432bac01-config-data\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.870712 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.872975 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/962e2340-5ed3-4560-b61b-4675432bac01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.873348 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.873406 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/962e2340-5ed3-4560-b61b-4675432bac01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.883806 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz8xp\" (UniqueName: \"kubernetes.io/projected/962e2340-5ed3-4560-b61b-4675432bac01-kube-api-access-lz8xp\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.898071 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12de511c-514e-496c-9fbf-6d1e10db81fc" path="/var/lib/kubelet/pods/12de511c-514e-496c-9fbf-6d1e10db81fc/volumes" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.911586 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"962e2340-5ed3-4560-b61b-4675432bac01\") " pod="openstack/rabbitmq-server-0" Jan 22 14:05:22 crc kubenswrapper[4769]: I0122 14:05:22.931321 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.378810 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.493277 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.494640 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"962e2340-5ed3-4560-b61b-4675432bac01","Type":"ContainerStarted","Data":"f342f136d881af427f064d4b6f00d7a8af4922e009ad2acef9a4431fd2fce2a6"} Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.628075 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.638611 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.650838 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.652265 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.654856 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.654962 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-5c97b" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.655068 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.655658 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.656852 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.656904 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.657021 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.680845 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.789617 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4kjs\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-kube-api-access-q4kjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790027 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790068 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790148 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790473 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790554 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790590 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790632 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790689 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.790726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893040 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893154 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4kjs\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-kube-api-access-q4kjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893227 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893317 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893426 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893469 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893538 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.893611 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.894354 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.894454 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.894864 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.895265 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.895456 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.896235 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.901300 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.901652 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.910040 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.911609 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.913831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4kjs\" (UniqueName: \"kubernetes.io/projected/1fd40f71-8afc-45fa-8a93-e784fb5f63c8-kube-api-access-q4kjs\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.925774 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"1fd40f71-8afc-45fa-8a93-e784fb5f63c8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:23 crc kubenswrapper[4769]: I0122 14:05:23.971375 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:24 crc kubenswrapper[4769]: I0122 14:05:24.417476 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 22 14:05:24 crc kubenswrapper[4769]: W0122 14:05:24.515682 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fd40f71_8afc_45fa_8a93_e784fb5f63c8.slice/crio-dd0a193bef14b19d3d5efc83e5b72ab87d7937837ac46f023c416837513b40e8 WatchSource:0}: Error finding container dd0a193bef14b19d3d5efc83e5b72ab87d7937837ac46f023c416837513b40e8: Status 404 returned error can't find the container with id dd0a193bef14b19d3d5efc83e5b72ab87d7937837ac46f023c416837513b40e8 Jan 22 14:05:24 crc kubenswrapper[4769]: I0122 14:05:24.900470 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5386c6-ecca-4882-b692-80c4f5a194e7" path="/var/lib/kubelet/pods/7b5386c6-ecca-4882-b692-80c4f5a194e7/volumes" Jan 22 14:05:25 crc kubenswrapper[4769]: I0122 14:05:25.521043 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"962e2340-5ed3-4560-b61b-4675432bac01","Type":"ContainerStarted","Data":"e72578ac8c9214570629443c31741f66617c0c80ddefde9c00cd86332e730626"} Jan 22 14:05:25 crc kubenswrapper[4769]: I0122 14:05:25.523459 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd40f71-8afc-45fa-8a93-e784fb5f63c8","Type":"ContainerStarted","Data":"dd0a193bef14b19d3d5efc83e5b72ab87d7937837ac46f023c416837513b40e8"} Jan 22 14:05:26 crc kubenswrapper[4769]: I0122 14:05:26.533841 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd40f71-8afc-45fa-8a93-e784fb5f63c8","Type":"ContainerStarted","Data":"35252555853ce340253c0eefa638373f8346698496121a40c846f916b330db36"} Jan 22 14:05:40 crc kubenswrapper[4769]: I0122 14:05:40.481901 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:05:40 crc kubenswrapper[4769]: I0122 14:05:40.482508 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:05:57 crc kubenswrapper[4769]: I0122 14:05:57.830328 4769 generic.go:334] "Generic (PLEG): container finished" podID="962e2340-5ed3-4560-b61b-4675432bac01" containerID="e72578ac8c9214570629443c31741f66617c0c80ddefde9c00cd86332e730626" exitCode=0 Jan 22 14:05:57 crc kubenswrapper[4769]: I0122 14:05:57.830396 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"962e2340-5ed3-4560-b61b-4675432bac01","Type":"ContainerDied","Data":"e72578ac8c9214570629443c31741f66617c0c80ddefde9c00cd86332e730626"} Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.840415 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"962e2340-5ed3-4560-b61b-4675432bac01","Type":"ContainerStarted","Data":"af186a92290f9236c6290610ca7c9388b55bbbafd3dfe2171977115f0e5758f3"} Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.840928 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.842366 4769 generic.go:334] "Generic (PLEG): container finished" podID="1fd40f71-8afc-45fa-8a93-e784fb5f63c8" containerID="35252555853ce340253c0eefa638373f8346698496121a40c846f916b330db36" exitCode=0 Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.842408 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd40f71-8afc-45fa-8a93-e784fb5f63c8","Type":"ContainerDied","Data":"35252555853ce340253c0eefa638373f8346698496121a40c846f916b330db36"} Jan 22 14:05:58 crc kubenswrapper[4769]: I0122 14:05:58.867698 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.867679845 podStartE2EDuration="36.867679845s" podCreationTimestamp="2026-01-22 14:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 14:05:58.864047537 +0000 UTC m=+1338.275157466" watchObservedRunningTime="2026-01-22 14:05:58.867679845 +0000 UTC m=+1338.278789774" Jan 22 14:05:59 crc kubenswrapper[4769]: I0122 14:05:59.853555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1fd40f71-8afc-45fa-8a93-e784fb5f63c8","Type":"ContainerStarted","Data":"220729d2dc07aeae0f1cc83562efd9a4bb53bd0aa613024a1bdfce66661c2aef"} Jan 22 14:05:59 crc kubenswrapper[4769]: I0122 14:05:59.854242 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:05:59 crc kubenswrapper[4769]: I0122 14:05:59.876142 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.876126083 podStartE2EDuration="36.876126083s" podCreationTimestamp="2026-01-22 14:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 14:05:59.871917359 +0000 UTC m=+1339.283027298" watchObservedRunningTime="2026-01-22 14:05:59.876126083 +0000 UTC m=+1339.287236002" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.019569 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"] Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.021775 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.024896 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tbnjt"/"openshift-service-ca.crt" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.026739 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tbnjt"/"kube-root-ca.crt" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.026989 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tbnjt"/"default-dockercfg-klv5q" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.039137 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"] Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.109835 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.109955 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.211449 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.211513 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.211973 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") pod \"must-gather-nlc24\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.233434 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") pod \"must-gather-nlc24\" (UID: 
\"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") " pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.339111 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/must-gather-nlc24" Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.870442 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.874849 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/must-gather-nlc24" event={"ID":"7529a8b3-1901-4ac4-9cee-f3ece4581ea8","Type":"ContainerStarted","Data":"a7d09c897c4e58008d980c499629ff714b40edf727052df005ba245496e82e9c"} Jan 22 14:06:01 crc kubenswrapper[4769]: I0122 14:06:01.887455 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"] Jan 22 14:06:09 crc kubenswrapper[4769]: I0122 14:06:09.966648 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/must-gather-nlc24" event={"ID":"7529a8b3-1901-4ac4-9cee-f3ece4581ea8","Type":"ContainerStarted","Data":"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"} Jan 22 14:06:10 crc kubenswrapper[4769]: I0122 14:06:10.482166 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:06:10 crc kubenswrapper[4769]: I0122 14:06:10.482242 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:06:10 crc kubenswrapper[4769]: I0122 14:06:10.978277 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/must-gather-nlc24" event={"ID":"7529a8b3-1901-4ac4-9cee-f3ece4581ea8","Type":"ContainerStarted","Data":"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b"} Jan 22 14:06:11 crc kubenswrapper[4769]: I0122 14:06:10.999928 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tbnjt/must-gather-nlc24" podStartSLOduration=3.194665982 podStartE2EDuration="10.999907075s" podCreationTimestamp="2026-01-22 14:06:00 +0000 UTC" firstStartedPulling="2026-01-22 14:06:01.870066818 +0000 UTC m=+1341.281176747" lastFinishedPulling="2026-01-22 14:06:09.675307911 +0000 UTC m=+1349.086417840" observedRunningTime="2026-01-22 14:06:10.994461947 +0000 UTC m=+1350.405571886" watchObservedRunningTime="2026-01-22 14:06:10.999907075 +0000 UTC m=+1350.411017004" Jan 22 14:06:12 crc kubenswrapper[4769]: I0122 14:06:12.935004 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.550352 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-89q4b"] Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.552495 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.659261 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.659344 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.761527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.761612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.762061 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.779894 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") pod \"crc-debug-89q4b\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.881907 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:13 crc kubenswrapper[4769]: I0122 14:06:13.979052 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 22 14:06:14 crc kubenswrapper[4769]: I0122 14:06:14.019607 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" event={"ID":"c8467ba6-6bd4-4eaa-a313-94ad5c8db789","Type":"ContainerStarted","Data":"1cb2cf491bb9c9686a93c3b68612bdef492589f7f683dc9b3c9232ec1e232336"} Jan 22 14:06:26 crc kubenswrapper[4769]: I0122 14:06:26.160653 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" event={"ID":"c8467ba6-6bd4-4eaa-a313-94ad5c8db789","Type":"ContainerStarted","Data":"6a9857699ee5a25dcfbbfd97a9806c7b0bc9c1947fe854676a7dd2547f60a656"} Jan 22 14:06:26 crc kubenswrapper[4769]: I0122 14:06:26.186626 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" podStartSLOduration=1.536388214 podStartE2EDuration="13.186606284s" podCreationTimestamp="2026-01-22 14:06:13 +0000 UTC" firstStartedPulling="2026-01-22 14:06:13.93578464 +0000 UTC m=+1353.346894569" lastFinishedPulling="2026-01-22 14:06:25.58600271 +0000 UTC m=+1364.997112639" observedRunningTime="2026-01-22 14:06:26.177598401 +0000 UTC m=+1365.588708330" watchObservedRunningTime="2026-01-22 14:06:26.186606284 +0000 UTC m=+1365.597716213" Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.482426 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.483109 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.483166 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.484029 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 14:06:40 crc kubenswrapper[4769]: I0122 14:06:40.484089 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f" gracePeriod=600 Jan 22 14:06:41 crc kubenswrapper[4769]: I0122 14:06:41.288423 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f" 
exitCode=0 Jan 22 14:06:41 crc kubenswrapper[4769]: I0122 14:06:41.288497 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f"} Jan 22 14:06:41 crc kubenswrapper[4769]: I0122 14:06:41.288913 4769 scope.go:117] "RemoveContainer" containerID="53e8fc2db9705c596d7460e51a2fbb034ceda2ed4d75e601aaaaedcba02d24aa" Jan 22 14:06:42 crc kubenswrapper[4769]: I0122 14:06:42.300931 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerStarted","Data":"e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"} Jan 22 14:06:42 crc kubenswrapper[4769]: I0122 14:06:42.303754 4769 generic.go:334] "Generic (PLEG): container finished" podID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" containerID="6a9857699ee5a25dcfbbfd97a9806c7b0bc9c1947fe854676a7dd2547f60a656" exitCode=0 Jan 22 14:06:42 crc kubenswrapper[4769]: I0122 14:06:42.303820 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" event={"ID":"c8467ba6-6bd4-4eaa-a313-94ad5c8db789","Type":"ContainerDied","Data":"6a9857699ee5a25dcfbbfd97a9806c7b0bc9c1947fe854676a7dd2547f60a656"} Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.435963 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.478996 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-89q4b"] Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.488065 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-89q4b"] Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.510253 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") pod \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.510339 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") pod \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\" (UID: \"c8467ba6-6bd4-4eaa-a313-94ad5c8db789\") " Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.510486 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host" (OuterVolumeSpecName: "host") pod "c8467ba6-6bd4-4eaa-a313-94ad5c8db789" (UID: "c8467ba6-6bd4-4eaa-a313-94ad5c8db789"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.510886 4769 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-host\") on node \"crc\" DevicePath \"\"" Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.525131 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp" (OuterVolumeSpecName: "kube-api-access-wkxtp") pod "c8467ba6-6bd4-4eaa-a313-94ad5c8db789" (UID: "c8467ba6-6bd4-4eaa-a313-94ad5c8db789"). InnerVolumeSpecName "kube-api-access-wkxtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:06:43 crc kubenswrapper[4769]: I0122 14:06:43.612318 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkxtp\" (UniqueName: \"kubernetes.io/projected/c8467ba6-6bd4-4eaa-a313-94ad5c8db789-kube-api-access-wkxtp\") on node \"crc\" DevicePath \"\"" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.334275 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cb2cf491bb9c9686a93c3b68612bdef492589f7f683dc9b3c9232ec1e232336" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.334339 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-89q4b" Jan 22 14:06:44 crc kubenswrapper[4769]: E0122 14:06:44.453942 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8467ba6_6bd4_4eaa_a313_94ad5c8db789.slice/crio-1cb2cf491bb9c9686a93c3b68612bdef492589f7f683dc9b3c9232ec1e232336\": RecentStats: unable to find data in memory cache]" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.653369 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-z66f5"] Jan 22 14:06:44 crc kubenswrapper[4769]: E0122 14:06:44.653723 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" containerName="container-00" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.653735 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" containerName="container-00" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.653930 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" containerName="container-00" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.654491 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.829397 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.829524 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.893344 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8467ba6-6bd4-4eaa-a313-94ad5c8db789" path="/var/lib/kubelet/pods/c8467ba6-6bd4-4eaa-a313-94ad5c8db789/volumes" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.931601 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.931715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.932000 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.954364 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") pod \"crc-debug-z66f5\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:44 crc kubenswrapper[4769]: I0122 14:06:44.979775 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:45 crc kubenswrapper[4769]: W0122 14:06:45.025088 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd739567_06f9_45a6_b424_6ff02babf529.slice/crio-16ed1b6298584cb6eee13f8c3c7bf972f210ed4b1803a6eda46f6a9bc1b72e1f WatchSource:0}: Error finding container 16ed1b6298584cb6eee13f8c3c7bf972f210ed4b1803a6eda46f6a9bc1b72e1f: Status 404 returned error can't find the container with id 16ed1b6298584cb6eee13f8c3c7bf972f210ed4b1803a6eda46f6a9bc1b72e1f Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.348405 4769 generic.go:334] "Generic (PLEG): container finished" podID="bd739567-06f9-45a6-b424-6ff02babf529" containerID="11242a9a9c2d5e36764427e969ca476d75b5cdf241d3ec86f11fa1bb416dffb8" exitCode=1 Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.348452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" event={"ID":"bd739567-06f9-45a6-b424-6ff02babf529","Type":"ContainerDied","Data":"11242a9a9c2d5e36764427e969ca476d75b5cdf241d3ec86f11fa1bb416dffb8"} Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.348485 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" event={"ID":"bd739567-06f9-45a6-b424-6ff02babf529","Type":"ContainerStarted","Data":"16ed1b6298584cb6eee13f8c3c7bf972f210ed4b1803a6eda46f6a9bc1b72e1f"} Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.386650 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-z66f5"] Jan 22 14:06:45 crc kubenswrapper[4769]: I0122 14:06:45.395046 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tbnjt/crc-debug-z66f5"] Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.467988 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.571844 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") pod \"bd739567-06f9-45a6-b424-6ff02babf529\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.571961 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") pod \"bd739567-06f9-45a6-b424-6ff02babf529\" (UID: \"bd739567-06f9-45a6-b424-6ff02babf529\") " Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.572109 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host" (OuterVolumeSpecName: "host") pod "bd739567-06f9-45a6-b424-6ff02babf529" (UID: "bd739567-06f9-45a6-b424-6ff02babf529"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.572923 4769 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd739567-06f9-45a6-b424-6ff02babf529-host\") on node \"crc\" DevicePath \"\"" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.577524 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj" (OuterVolumeSpecName: "kube-api-access-dxszj") pod "bd739567-06f9-45a6-b424-6ff02babf529" (UID: "bd739567-06f9-45a6-b424-6ff02babf529"). InnerVolumeSpecName "kube-api-access-dxszj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.675091 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxszj\" (UniqueName: \"kubernetes.io/projected/bd739567-06f9-45a6-b424-6ff02babf529-kube-api-access-dxszj\") on node \"crc\" DevicePath \"\"" Jan 22 14:06:46 crc kubenswrapper[4769]: I0122 14:06:46.894867 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd739567-06f9-45a6-b424-6ff02babf529" path="/var/lib/kubelet/pods/bd739567-06f9-45a6-b424-6ff02babf529/volumes" Jan 22 14:06:47 crc kubenswrapper[4769]: I0122 14:06:47.378516 4769 scope.go:117] "RemoveContainer" containerID="11242a9a9c2d5e36764427e969ca476d75b5cdf241d3ec86f11fa1bb416dffb8" Jan 22 14:06:47 crc kubenswrapper[4769]: I0122 14:06:47.378557 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/crc-debug-z66f5" Jan 22 14:06:58 crc kubenswrapper[4769]: I0122 14:06:58.988258 4769 scope.go:117] "RemoveContainer" containerID="9b9e64b997b26d114d51b0ae4c6e0266bbcb40beb8208c3fa5614f05a348bcc2" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.012582 4769 scope.go:117] "RemoveContainer" containerID="ae72a3cad378713d6148c709f4937c708ece4459bfb2c249eb2d7b58d0c80b04" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.035760 4769 scope.go:117] "RemoveContainer" containerID="787c971a0dea74b3f6ee351dd1bb60c21eb90e1fc50d951e6c355694f371ee32" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.090082 4769 scope.go:117] "RemoveContainer" containerID="401fb4362859b85fbcab13853d6edb403e6c11a9836d41d62c76e8de98656fce" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.134283 4769 scope.go:117] "RemoveContainer" containerID="1df5bb57a2b37a726deb06ee2a4311afcd91a86d912ad8365dad00a8584aad2b" Jan 22 14:06:59 crc kubenswrapper[4769]: I0122 14:06:59.158752 4769 scope.go:117] "RemoveContainer" containerID="cd37417a78b080b1ccc1b5edbe869aca8460373ef9a4d35cbfcb0a8060072f8f" Jan 22 14:07:15 crc kubenswrapper[4769]: I0122 14:07:15.758192 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-8bb3-account-create-update-x6jhs_ec90402f-c994-4710-b82f-5c8cc3f12fdf/mariadb-account-create-update/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.006799 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5765d95c66-48prv_95a5cf33-efc2-4ca4-93cf-c397436588cb/barbican-api/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.148492 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5765d95c66-48prv_95a5cf33-efc2-4ca4-93cf-c397436588cb/barbican-api-log/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.192778 4769 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_barbican-db-create-5nx2t_3d72603e-a10a-4490-8298-67db64d087fc/mariadb-database-create/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.360004 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-db-sync-zzjpd_a7f766e1-262c-4861-a117-2454631e284f/barbican-db-sync/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.387179 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fffc955cd-tlfq2_1ced7731-706e-49ab-8e05-af9f7dc7465a/barbican-keystone-listener/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.478239 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fffc955cd-tlfq2_1ced7731-706e-49ab-8e05-af9f7dc7465a/barbican-keystone-listener-log/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.627986 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79fdf5695-77th5_2d271baa-4d4e-42f2-87ec-a0c8a7314560/barbican-worker-log/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.629212 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-79fdf5695-77th5_2d271baa-4d4e-42f2-87ec-a0c8a7314560/barbican-worker/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.798514 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d9fe083b-8f17-4c51-87ff-a8a7f447190d/ceilometer-central-agent/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.848060 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d9fe083b-8f17-4c51-87ff-a8a7f447190d/ceilometer-notification-agent/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.890013 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d9fe083b-8f17-4c51-87ff-a8a7f447190d/proxy-httpd/0.log" Jan 22 14:07:16 crc kubenswrapper[4769]: I0122 14:07:16.981645 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d9fe083b-8f17-4c51-87ff-a8a7f447190d/sg-core/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.046498 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-8372-account-create-update-lq4fn_51e2f7fd-cd2e-4a84-b62a-27915d32469c/mariadb-account-create-update/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.188116 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f66670ed-ef72-4a45-be6e-add4b5f52f94/cinder-api/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.263781 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f66670ed-ef72-4a45-be6e-add4b5f52f94/cinder-api-log/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.325499 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-create-7r9tp_ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0/mariadb-database-create/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.466720 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-db-sync-l4hnw_3eb8819f-512d-43d8-a59e-1ba8e7e1fb06/cinder-db-sync/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.600212 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4552f275-d56c-4f3d-a8fd-7e5c4e2da02e/cinder-scheduler/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 
14:07:17.629995 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4552f275-d56c-4f3d-a8fd-7e5c4e2da02e/probe/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.797631 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-n9fh2_6862cbe8-3411-44fc-a4a8-429c3551f695/init/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.943208 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-n9fh2_6862cbe8-3411-44fc-a4a8-429c3551f695/init/0.log" Jan 22 14:07:17 crc kubenswrapper[4769]: I0122 14:07:17.946681 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59cf4bdb65-n9fh2_6862cbe8-3411-44fc-a4a8-429c3551f695/dnsmasq-dns/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.018308 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-b906-account-create-update-rndmt_73fd3df5-6e83-4893-9368-66c1ba35155a/mariadb-account-create-update/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.147847 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-create-dxwjl_b909a789-674d-40ba-b332-700e27464966/mariadb-database-create/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.219236 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-db-sync-t9sxw_b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299/glance-db-sync/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.369192 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6e1405ea-42cd-4345-b44a-8e72350a3357/glance-httpd/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.411811 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_6e1405ea-42cd-4345-b44a-8e72350a3357/glance-log/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.568986 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_adf621f0-a198-4042-93a3-791ed71e1ee3/glance-log/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.596683 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_adf621f0-a198-4042-93a3-791ed71e1ee3/glance-httpd/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.742188 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cc4c8d8bd-69kmb_9a6a04bb-fa49-41f8-b75b-9c27873f8a1f/horizon/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.786495 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7cc4c8d8bd-69kmb_9a6a04bb-fa49-41f8-b75b-9c27873f8a1f/horizon-log/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.834352 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-0c5f-account-create-update-dbzd4_bced8c79-d4b4-42dc-ba19-a4ba1eeb4387/mariadb-account-create-update/0.log" Jan 22 14:07:18 crc kubenswrapper[4769]: I0122 14:07:18.980016 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bootstrap-nv6tp_4b938618-acdf-4f5f-8a04-daabc17cbb0c/keystone-bootstrap/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.108314 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-d8d684bc6-pmxwh_ddb12191-d02d-4e79-82cd-d164ecaf2093/keystone-api/0.log" Jan 22 14:07:19 crc 
kubenswrapper[4769]: I0122 14:07:19.177173 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-create-mw8m7_8e5e1134-cb08-4676-b40b-5e05af038ec7/mariadb-database-create/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.294286 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-db-sync-r7c9w_275c0c66-cbd1-4469-81f6-c33a1eab0ed6/keystone-db-sync/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.542119 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_27867d6f-28eb-45b6-afd4-9ad9da5a5a0f/kube-state-metrics/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.727645 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-24cb-account-create-update-rtdf4_cb68cb3e-c079-4e87-ae9b-be93a2b8b80e/mariadb-account-create-update/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.905623 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d6bcd56b9-2hx4m_a582ad75-7aa2-4ee6-9631-6726b7db9650/neutron-api/0.log" Jan 22 14:07:19 crc kubenswrapper[4769]: I0122 14:07:19.972681 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d6bcd56b9-2hx4m_a582ad75-7aa2-4ee6-9631-6726b7db9650/neutron-httpd/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.125662 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-create-892lk_ad0702a4-ee8a-45da-9cb7-40c2e4b257b9/mariadb-database-create/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.224381 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-db-sync-rqjpw_f7c0ef06-5806-418c-8a10-81ea6afb0401/neutron-db-sync/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.465254 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b103e0f8-85be-424c-a705-112fb70500b6/nova-api-api/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.503260 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b103e0f8-85be-424c-a705-112fb70500b6/nova-api-log/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.510879 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-264d-account-create-update-4z8cb_fe68065a-9702-4440-a09a-2698d21ad5cc/mariadb-account-create-update/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.680480 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-db-create-tx7mp_288566dc-b78e-46e4-9bd3-c61bc9c2a6ce/mariadb-database-create/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.744284 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-49d8-account-create-update-gnbhc_b33b7a35-52b8-47c6-b5a7-5cf87d838927/mariadb-account-create-update/0.log" Jan 22 14:07:20 crc kubenswrapper[4769]: I0122 14:07:20.965145 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-cell-mapping-6vgx7_3137766d-8b45-47a0-a7ca-f1a3c381450d/nova-manage/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.156363 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-db-sync-hql94_4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf/nova-cell0-conductor-db-sync/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.165408 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_66c7ff68-1167-4dbe-8e53-40f378941703/nova-cell0-conductor-conductor/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.390389 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-db-create-5t26t_e45f7c9a-23a2-40fe-80dc-305f1fbc8e17/mariadb-database-create/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.405767 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-cell-mapping-5j7zn_4b01ed3a-6c71-4384-80a2-59814d125061/nova-manage/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.724001 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-db-sync-cg5m6_60fa7062-c4e9-4700-88e1-af5262989c6f/nova-cell1-conductor-db-sync/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.728445 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e291c368-66b3-42b3-ad52-e3cd93471116/nova-cell1-conductor-conductor/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.911275 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-db-create-fllmn_ecb8a996-384c-4155-b45d-6a6335165545/mariadb-database-create/0.log" Jan 22 14:07:21 crc kubenswrapper[4769]: I0122 14:07:21.978491 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-ddb8-account-create-update-zm48k_cdcc2db5-9739-4e49-a6cc-3f7aff70f97d/mariadb-account-create-update/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.169961 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5697f97b-b5e1-4e54-aebb-540e12b7953c/nova-cell1-novncproxy-novncproxy/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.374264 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a6fa05e3-584d-4c81-bef8-b5224b93fba3/nova-metadata-log/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.381584 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_a6fa05e3-584d-4c81-bef8-b5224b93fba3/nova-metadata-metadata/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.538163 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_169a141c-dd3f-4efa-9b61-bb8df13bcd49/nova-scheduler-scheduler/0.log" Jan 22 14:07:22 crc kubenswrapper[4769]: I0122 14:07:22.596941 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_048fbe43-0fef-46e8-bc9d-038c96a4696c/mysql-bootstrap/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.022168 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_048fbe43-0fef-46e8-bc9d-038c96a4696c/galera/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.052027 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d5478968-e798-44de-b3ed-632864fc0607/mysql-bootstrap/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.069999 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_048fbe43-0fef-46e8-bc9d-038c96a4696c/mysql-bootstrap/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.242978 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d5478968-e798-44de-b3ed-632864fc0607/mysql-bootstrap/0.log" Jan 22 14:07:23 crc 
kubenswrapper[4769]: I0122 14:07:23.256341 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d5478968-e798-44de-b3ed-632864fc0607/galera/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.305056 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a46459a9-7fab-439c-95fe-5d6cdcb16997/openstackclient/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.476587 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ljbrk_db7ce269-d7ec-4db1-aab3-b22da5d56c6e/ovn-controller/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.563715 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-2ndkt_cbba9b5e-2f1d-4a3a-930e-c835070aefe9/openstack-network-exporter/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.710232 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-57w6l_2f6b8be2-7370-47ca-843b-1dea67d837c3/ovsdb-server-init/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.904599 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-57w6l_2f6b8be2-7370-47ca-843b-1dea67d837c3/ovsdb-server-init/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.935542 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-57w6l_2f6b8be2-7370-47ca-843b-1dea67d837c3/ovs-vswitchd/0.log" Jan 22 14:07:23 crc kubenswrapper[4769]: I0122 14:07:23.996048 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-57w6l_2f6b8be2-7370-47ca-843b-1dea67d837c3/ovsdb-server/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.157764 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_32d5b8f0-b7c1-4eeb-9b49-85b0240d28df/openstack-network-exporter/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.202721 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_32d5b8f0-b7c1-4eeb-9b49-85b0240d28df/ovn-northd/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.235315 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_760402cd-68ff-4d2e-a1ba-c54132e75c13/openstack-network-exporter/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.393527 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_760402cd-68ff-4d2e-a1ba-c54132e75c13/ovsdbserver-nb/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.500783 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1a4e51d1-8dea-4f12-b7e9-7888f5672711/openstack-network-exporter/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.531179 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1a4e51d1-8dea-4f12-b7e9-7888f5672711/ovsdbserver-sb/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.686968 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b8cb8655d-vl7kp_8d4588b0-8c00-47bf-8b6d-cab4a5d792ab/placement-api/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.788946 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b8cb8655d-vl7kp_8d4588b0-8c00-47bf-8b6d-cab4a5d792ab/placement-log/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.832298 4769 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-a329-account-create-update-5dtjs_46ca4e3b-a376-4f54-88c0-75d4a912d489/mariadb-account-create-update/0.log" Jan 22 14:07:24 crc kubenswrapper[4769]: I0122 14:07:24.999219 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-create-7q976_257149e5-e0f3-4721-9329-6c119ce91192/mariadb-database-create/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.062065 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-db-sync-bjdj8_a0e92228-1a9b-49fc-9dfd-0493f70f5ee8/placement-db-sync/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.227138 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd40f71-8afc-45fa-8a93-e784fb5f63c8/setup-container/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.441328 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd40f71-8afc-45fa-8a93-e784fb5f63c8/setup-container/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.487627 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_962e2340-5ed3-4560-b61b-4675432bac01/setup-container/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.552360 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1fd40f71-8afc-45fa-8a93-e784fb5f63c8/rabbitmq/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.681835 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_962e2340-5ed3-4560-b61b-4675432bac01/setup-container/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.685455 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_962e2340-5ed3-4560-b61b-4675432bac01/rabbitmq/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.771526 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_root-account-create-update-trlj5_4521e7ce-1245-4a18-9179-83a2b288e227/mariadb-account-create-update/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.955782 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-576cb8587-7cl26_75afafe2-c784-45fa-8104-1115c8921138/proxy-server/0.log" Jan 22 14:07:25 crc kubenswrapper[4769]: I0122 14:07:25.966139 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-576cb8587-7cl26_75afafe2-c784-45fa-8104-1115c8921138/proxy-httpd/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.152931 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jmhxf_f13b9a7b-6f5e-48fd-8d95-3beb851e9819/swift-ring-rebalance/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.225573 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/account-auditor/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.259500 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/account-reaper/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.418339 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/account-replicator/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.609842 
4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/container-auditor/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.609890 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/account-server/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.720386 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/container-replicator/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.800771 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/container-updater/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.817670 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/container-server/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.931715 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-auditor/0.log" Jan 22 14:07:26 crc kubenswrapper[4769]: I0122 14:07:26.997591 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-expirer/0.log" Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.037438 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-server/0.log" Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.057477 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-replicator/0.log" Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.144477 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/object-updater/0.log" Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.171468 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/rsync/0.log" Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.282626 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ce65dba3-22b9-482f-b3da-2f4705468ea4/swift-recon-cron/0.log" Jan 22 14:07:27 crc kubenswrapper[4769]: I0122 14:07:27.299562 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3aa5525a-0eb2-487f-8721-3ef58f5df4aa/memcached/0.log" Jan 22 14:07:49 crc kubenswrapper[4769]: I0122 14:07:49.972862 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-54q5q_141f0476-23eb-4a43-a4ac-4d33c12bfb5b/manager/0.log" Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.133486 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/util/0.log" Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.334604 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/util/0.log" Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.349804 4769 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/pull/0.log" Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.356557 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/pull/0.log" Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.686685 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/pull/0.log" Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.764118 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/util/0.log" Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.800377 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c02f5c6e90e220c3e85f478d75741465573a92143592780ee4258ca577lthv9_7585045d-5962-4b7d-903e-97f301a8fd47/extract/0.log" Jan 22 14:07:50 crc kubenswrapper[4769]: I0122 14:07:50.925507 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-2q2v2_bc0b4b03-ee7e-44ed-9c1f-f481ae1a3049/manager/0.log" Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.062945 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-rlcb9_c6b325d8-50c6-411a-bc7f-938b284f0efb/manager/0.log" Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.195035 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-wvxp8_ae11ee9d-5ccf-490d-b457-294820d6a337/manager/0.log" Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.279444 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-brq9d_d40b03ae-0991-4364-85f3-89cf5e8d5686/manager/0.log" Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.423223 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-8rxgq_7d908338-dcdc-4423-b719-02d30f3834ed/manager/0.log" Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.687776 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-5njtw_c367fcfb-38d9-4834-970d-7004d16c8249/manager/0.log" Jan 22 14:07:51 crc kubenswrapper[4769]: I0122 14:07:51.818825 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-zt4sd_13c33fdb-b388-4fdf-996c-544286f47a73/manager/0.log" Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.029268 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-f2klg_d8d08194-af60-4614-b425-1b45340cd73b/manager/0.log" Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.182705 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-ttb7f_3d8a97d6-e3bd-49e0-bc78-024286cce303/manager/0.log" Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.266001 4769 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-w77v6_a32a1e6f-004c-4675-abed-10078b43492a/manager/0.log" Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.381158 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-x8dvt_ebd5834b-ef11-40bb-9d15-6878767e7bef/manager/0.log" Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.524893 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-mwhh9_80a16478-da8a-4d2f-89df-163fada49abe/manager/0.log" Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.581774 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-fzz6p_8217a619-751c-4d07-a96c-ce3208f08e84/manager/0.log" Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.735065 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8542tcht_2b0a07de-4458-4970-a304-a608625bdebf/manager/0.log" Jan 22 14:07:52 crc kubenswrapper[4769]: I0122 14:07:52.915132 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-f94887bb5-8mc8h_a48b50b3-ad51-4268-a926-bf2f1d7fd3f6/operator/0.log" Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.180256 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-m6xzn_a2d7498a-59be-42c8-913e-d8c8c596828f/registry-server/0.log" Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.485266 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-prfwv_11299941-70c0-41a8-ad9c-5c4648c3aa95/manager/0.log" Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.554637 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-ctf5z_f13c0d19-4c14-4897-bbc5-5c220d207e41/manager/0.log" Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.730673 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54d678f547-4dd5j_a2bbc43c-9feb-4287-9e35-6f100c6644f6/manager/0.log" Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.743922 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-hv48h_14005034-1ce8-4d62-afbc-66cd1d0d9be1/operator/0.log" Jan 22 14:07:53 crc kubenswrapper[4769]: I0122 14:07:53.945865 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-jbtsm_d931ff7f-f554-4249-bc34-2cd09fc97427/manager/0.log" Jan 22 14:07:54 crc kubenswrapper[4769]: I0122 14:07:54.061849 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-gwzt2_3c6369d9-2ecf-4187-bb10-76bde13ecd5d/manager/0.log" Jan 22 14:07:54 crc kubenswrapper[4769]: I0122 14:07:54.331650 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-pkl6g_ed1198a5-a7fa-4ab4-9656-8e9700deec37/manager/0.log" Jan 22 14:07:54 crc kubenswrapper[4769]: I0122 14:07:54.372909 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5ffb9c6597-b2w8p_31021ae3-dbb7-4ceb-8737-31052d849f0a/manager/0.log" Jan 22 14:07:59 crc kubenswrapper[4769]: I0122 14:07:59.289678 4769 scope.go:117] "RemoveContainer" containerID="df266f1e50e71fe12d82262c0a9066d4bf0ba22b1f00a59909f486af0c226b44" Jan 22 14:08:12 crc kubenswrapper[4769]: I0122 14:08:12.204730 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pzj8w_db7a69ec-2a82-4f9b-b83a-42237a02087e/control-plane-machine-set-operator/0.log" Jan 22 14:08:12 crc kubenswrapper[4769]: I0122 14:08:12.367938 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-65brj_f4e58a9e-ecc8-43de-9518-0b014b2a27d2/kube-rbac-proxy/0.log" Jan 22 14:08:12 crc kubenswrapper[4769]: I0122 14:08:12.398966 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-65brj_f4e58a9e-ecc8-43de-9518-0b014b2a27d2/machine-api-operator/0.log" Jan 22 14:08:24 crc kubenswrapper[4769]: I0122 14:08:24.585338 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-vn9qf_0390ceac-8902-475a-b739-ddc13392f828/cert-manager-controller/0.log" Jan 22 14:08:24 crc kubenswrapper[4769]: I0122 14:08:24.768208 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-dzj2v_e3a1ec89-c852-4274-b95b-c070b9cf8c20/cert-manager-webhook/0.log" Jan 22 14:08:24 crc kubenswrapper[4769]: I0122 14:08:24.772963 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-ptnxb_2bdf39e4-511e-4d06-a19a-7aa0cda68e94/cert-manager-cainjector/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.522783 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-t9pnx_bd1eaf1c-9da8-4372-888f-ed8464d4313d/nmstate-console-plugin/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.722216 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-v6r9x_7e7ab7e8-7c34-4b26-9c19-33ae90a756ec/nmstate-handler/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.768939 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xsnfh_fd9c945e-a392-4a96-8a06-893a09e8dc19/kube-rbac-proxy/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.841903 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-xsnfh_fd9c945e-a392-4a96-8a06-893a09e8dc19/nmstate-metrics/0.log" Jan 22 14:08:37 crc kubenswrapper[4769]: I0122 14:08:37.929923 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-z29kl_9342ab94-785a-427b-84d2-5ac6ff709531/nmstate-operator/0.log" Jan 22 14:08:38 crc kubenswrapper[4769]: I0122 14:08:38.100201 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-64j27_880459e4-297b-408b-8205-c2197bf19c18/nmstate-webhook/0.log" Jan 22 14:08:59 crc kubenswrapper[4769]: I0122 14:08:59.385689 4769 scope.go:117] "RemoveContainer" containerID="8cddcdbb8911a19c3b16e342ad30ed08a0f42dc1a1d70ee5aaed962fdb512de3" Jan 22 14:08:59 crc kubenswrapper[4769]: I0122 14:08:59.421308 4769 scope.go:117] "RemoveContainer" 
containerID="fe451f9d4d036e3a9401a1c3a26fc5a0b7d0eb48182d28ec094d84c5d2642db8" Jan 22 14:09:05 crc kubenswrapper[4769]: I0122 14:09:05.744011 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-qkpds_8fbbec23-1005-4364-bf82-8a646a24801a/kube-rbac-proxy/0.log" Jan 22 14:09:05 crc kubenswrapper[4769]: I0122 14:09:05.858941 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-qkpds_8fbbec23-1005-4364-bf82-8a646a24801a/controller/0.log" Jan 22 14:09:05 crc kubenswrapper[4769]: I0122 14:09:05.953974 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-frr-files/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.161533 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.161879 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-frr-files/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.188659 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-reloader/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.194397 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-reloader/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.381091 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.385059 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-reloader/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.416621 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-frr-files/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.427116 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.615578 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-frr-files/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.635669 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/controller/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.640003 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-reloader/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.648368 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/cp-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.822544 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/frr-metrics/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 
14:09:06.878584 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/kube-rbac-proxy-frr/0.log" Jan 22 14:09:06 crc kubenswrapper[4769]: I0122 14:09:06.879667 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/kube-rbac-proxy/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.033002 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/reloader/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.082947 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-9n85j_82c00d20-0e87-4f34-9cae-d454867c62a0/frr-k8s-webhook-server/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.265304 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-ddb77dbc9-z2nv4_0e40742e-231f-4f7b-aa4b-fb58332c3dbe/manager/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.486699 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b46c7846-xbsl9_5ee84f81-0260-4579-b602-c37bcf5cc7aa/webhook-server/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.556135 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5vm9t_877a13a0-eef8-4409-b421-e3a8c23abc8a/frr/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.568181 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lwzgw_4762d945-0720-43a9-8af2-0317ce89dda2/kube-rbac-proxy/0.log" Jan 22 14:09:07 crc kubenswrapper[4769]: I0122 14:09:07.928704 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lwzgw_4762d945-0720-43a9-8af2-0317ce89dda2/speaker/0.log" Jan 22 14:09:10 crc kubenswrapper[4769]: I0122 14:09:10.481701 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 14:09:10 crc kubenswrapper[4769]: I0122 14:09:10.482135 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.500767 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/util/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.618838 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/util/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.642119 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/pull/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 
14:09:20.697042 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/pull/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.892538 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/pull/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.895687 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/util/0.log" Jan 22 14:09:20 crc kubenswrapper[4769]: I0122 14:09:20.909877 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc77w5v_2bd12d13-4630-4e58-95dd-7e6b2bb89428/extract/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.067560 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/util/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.222402 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/util/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.247476 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/pull/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.260407 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/pull/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.412424 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/pull/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.418544 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/util/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.465135 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71342pvx_38dd0c5f-6afb-4730-8900-e3e8b33f282a/extract/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.594095 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-utilities/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.788323 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-utilities/0.log" Jan 22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.804749 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-content/0.log" Jan 
22 14:09:21 crc kubenswrapper[4769]: I0122 14:09:21.831476 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-content/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.043013 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-content/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.060304 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/extract-utilities/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.159493 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8vlvj_6bbcc4b3-c280-4093-9419-7d94204256fe/registry-server/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.230922 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-utilities/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.415973 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-utilities/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.443001 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-content/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.462784 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-content/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.649747 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-content/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.686893 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/extract-utilities/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.937625 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-utilities/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.973605 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8nrlf_5b9b79f2-127c-4533-a170-8cb16e845c18/registry-server/0.log" Jan 22 14:09:22 crc kubenswrapper[4769]: I0122 14:09:22.993205 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7vfmb_1cfacd8e-cbec-4f68-b90c-ede3a679e454/marketplace-operator/0.log" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.165558 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-content/0.log" Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.178317 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-content/0.log" Jan 22 
14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.237440 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-utilities/0.log"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.255860 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"]
Jan 22 14:09:23 crc kubenswrapper[4769]: E0122 14:09:23.256230 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd739567-06f9-45a6-b424-6ff02babf529" containerName="container-00"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.256247 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd739567-06f9-45a6-b424-6ff02babf529" containerName="container-00"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.256415 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd739567-06f9-45a6-b424-6ff02babf529" containerName="container-00"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.257721 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.271433 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"]
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.290273 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.290547 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.290611 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.391941 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.392003 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.392063 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.392669 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.392718 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.415575 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") pod \"redhat-marketplace-92j5p\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") " pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.551082 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-utilities/0.log"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.555586 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/extract-content/0.log"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.572771 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-twpxx_d88e1938-2f4c-43c7-9af2-98fb7222cee2/registry-server/0.log"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.586692 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:23 crc kubenswrapper[4769]: I0122 14:09:23.956536 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-utilities/0.log"
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.123556 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-utilities/0.log"
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.140915 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-content/0.log"
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.151891 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-content/0.log"
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.207500 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"]
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.488072 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-content/0.log"
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.543742 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/extract-utilities/0.log"
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.783627 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d134a86-4a31-4784-b202-723a7c7f7249" containerID="11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d" exitCode=0
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.783670 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerDied","Data":"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d"}
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.783696 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerStarted","Data":"47a39392968bf77600e9b667b4562d27c8835d5c1d21bb61afc9f69211982fac"}
Jan 22 14:09:24 crc kubenswrapper[4769]: I0122 14:09:24.800963 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-dtrsx_c5db9abf-deb2-494a-b618-7180fbf1e53e/registry-server/0.log"
Jan 22 14:09:25 crc kubenswrapper[4769]: I0122 14:09:25.794106 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d134a86-4a31-4784-b202-723a7c7f7249" containerID="bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb" exitCode=0
Jan 22 14:09:25 crc kubenswrapper[4769]: I0122 14:09:25.794309 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerDied","Data":"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb"}
Jan 22 14:09:26 crc kubenswrapper[4769]: I0122 14:09:26.815639 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerStarted","Data":"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"}
Jan 22 14:09:26 crc kubenswrapper[4769]: I0122 14:09:26.842529 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-92j5p" podStartSLOduration=2.443930121 podStartE2EDuration="3.842475392s" podCreationTimestamp="2026-01-22 14:09:23 +0000 UTC" firstStartedPulling="2026-01-22 14:09:24.785865277 +0000 UTC m=+1544.196975206" lastFinishedPulling="2026-01-22 14:09:26.184410548 +0000 UTC m=+1545.595520477" observedRunningTime="2026-01-22 14:09:26.833961951 +0000 UTC m=+1546.245071890" watchObservedRunningTime="2026-01-22 14:09:26.842475392 +0000 UTC m=+1546.253585321"
Jan 22 14:09:34 crc kubenswrapper[4769]: I0122 14:09:34.092301 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:34 crc kubenswrapper[4769]: I0122 14:09:34.093685 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:34 crc kubenswrapper[4769]: I0122 14:09:34.187936 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:35 crc kubenswrapper[4769]: I0122 14:09:35.151810 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:35 crc kubenswrapper[4769]: I0122 14:09:35.199881 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"]
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.116955 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-92j5p" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" containerName="registry-server" containerID="cri-o://cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c" gracePeriod=2
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.609465 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.721252 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") pod \"8d134a86-4a31-4784-b202-723a7c7f7249\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") "
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.721617 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") pod \"8d134a86-4a31-4784-b202-723a7c7f7249\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") "
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.721736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") pod \"8d134a86-4a31-4784-b202-723a7c7f7249\" (UID: \"8d134a86-4a31-4784-b202-723a7c7f7249\") "
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.722655 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities" (OuterVolumeSpecName: "utilities") pod "8d134a86-4a31-4784-b202-723a7c7f7249" (UID: "8d134a86-4a31-4784-b202-723a7c7f7249"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.737497 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w" (OuterVolumeSpecName: "kube-api-access-7846w") pod "8d134a86-4a31-4784-b202-723a7c7f7249" (UID: "8d134a86-4a31-4784-b202-723a7c7f7249"). InnerVolumeSpecName "kube-api-access-7846w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.747153 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d134a86-4a31-4784-b202-723a7c7f7249" (UID: "8d134a86-4a31-4784-b202-723a7c7f7249"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.824114 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.824406 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7846w\" (UniqueName: \"kubernetes.io/projected/8d134a86-4a31-4784-b202-723a7c7f7249-kube-api-access-7846w\") on node \"crc\" DevicePath \"\""
Jan 22 14:09:37 crc kubenswrapper[4769]: I0122 14:09:37.824483 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d134a86-4a31-4784-b202-723a7c7f7249-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.126968 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d134a86-4a31-4784-b202-723a7c7f7249" containerID="cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c" exitCode=0
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.127038 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92j5p"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.127061 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerDied","Data":"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"}
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.127108 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92j5p" event={"ID":"8d134a86-4a31-4784-b202-723a7c7f7249","Type":"ContainerDied","Data":"47a39392968bf77600e9b667b4562d27c8835d5c1d21bb61afc9f69211982fac"}
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.127141 4769 scope.go:117] "RemoveContainer" containerID="cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.152821 4769 scope.go:117] "RemoveContainer" containerID="bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.170394 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"]
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.182829 4769 scope.go:117] "RemoveContainer" containerID="11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.187235 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-92j5p"]
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.228924 4769 scope.go:117] "RemoveContainer" containerID="cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"
Jan 22 14:09:38 crc kubenswrapper[4769]: E0122 14:09:38.229532 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c\": container with ID starting with cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c not found: ID does not exist" containerID="cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.229575 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c"} err="failed to get container status \"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c\": rpc error: code = NotFound desc = could not find container \"cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c\": container with ID starting with cac43fa2539ccf7ae13943d94601d9b2376b357ee37f3132af50904dc13de97c not found: ID does not exist"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.229599 4769 scope.go:117] "RemoveContainer" containerID="bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb"
Jan 22 14:09:38 crc kubenswrapper[4769]: E0122 14:09:38.230058 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb\": container with ID starting with bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb not found: ID does not exist" containerID="bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.230080 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb"} err="failed to get container status \"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb\": rpc error: code = NotFound desc = could not find container \"bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb\": container with ID starting with bfd583b98c603e3ab895e84401c3449d207098ea05d0b41e61e813796e22bdfb not found: ID does not exist"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.230097 4769 scope.go:117] "RemoveContainer" containerID="11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d"
Jan 22 14:09:38 crc kubenswrapper[4769]: E0122 14:09:38.232074 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d\": container with ID starting with 11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d not found: ID does not exist" containerID="11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.232110 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d"} err="failed to get container status \"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d\": rpc error: code = NotFound desc = could not find container \"11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d\": container with ID starting with 11ff6403c6dd7be206723f69857fd543b566de1695bf10db1899a84b1e899b8d not found: ID does not exist"
Jan 22 14:09:38 crc kubenswrapper[4769]: I0122 14:09:38.898533 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d134a86-4a31-4784-b202-723a7c7f7249" path="/var/lib/kubelet/pods/8d134a86-4a31-4784-b202-723a7c7f7249/volumes"
Jan 22 14:09:40 crc kubenswrapper[4769]: I0122 14:09:40.481667 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 14:09:40 crc kubenswrapper[4769]: I0122 14:09:40.482246 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 14:09:46 crc kubenswrapper[4769]: E0122 14:09:46.308336 4769 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.50:33536->38.102.83.50:45103: write tcp 38.102.83.50:33536->38.102.83.50:45103: write: broken pipe
Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.482071 4769 patch_prober.go:28] interesting pod/machine-config-daemon-hwhw7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.482582 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.482659 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7"
Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.483517 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"} pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 14:10:10 crc kubenswrapper[4769]: I0122 14:10:10.483588 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerName="machine-config-daemon" containerID="cri-o://e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" gracePeriod=600
Jan 22 14:10:10 crc kubenswrapper[4769]: E0122 14:10:10.611625 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:10:11 crc kubenswrapper[4769]: I0122 14:10:11.438411 4769 generic.go:334] "Generic (PLEG): container finished" podID="f0af8746-c9f0-48e6-8a60-02fed286b419" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135" exitCode=0
Jan 22 14:10:11 crc kubenswrapper[4769]: I0122 14:10:11.438762 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" event={"ID":"f0af8746-c9f0-48e6-8a60-02fed286b419","Type":"ContainerDied","Data":"e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"}
Jan 22 14:10:11 crc kubenswrapper[4769]: I0122 14:10:11.438825 4769 scope.go:117] "RemoveContainer" containerID="b11c852b1916b3e6aabc4731560f2f295531ff82773fd1f45e29d26517b1467f"
Jan 22 14:10:11 crc kubenswrapper[4769]: I0122 14:10:11.439541 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:10:11 crc kubenswrapper[4769]: E0122 14:10:11.439846 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:10:22 crc kubenswrapper[4769]: I0122 14:10:22.893925 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:10:22 crc kubenswrapper[4769]: E0122 14:10:22.895321 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:10:37 crc kubenswrapper[4769]: I0122 14:10:37.883900 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:10:37 crc kubenswrapper[4769]: E0122 14:10:37.885038 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.048955 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mw8m7"]
Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.057855 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"]
Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.066497 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"]
Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.074102 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7q976"]
Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.081948 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a329-account-create-update-5dtjs"]
Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.089421 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7q976"]
Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.096555 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0c5f-account-create-update-dbzd4"]
Jan 22 14:10:39 crc kubenswrapper[4769]: I0122 14:10:39.104277 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mw8m7"]
Jan 22 14:10:40 crc kubenswrapper[4769]: I0122 14:10:40.900652 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257149e5-e0f3-4721-9329-6c119ce91192" path="/var/lib/kubelet/pods/257149e5-e0f3-4721-9329-6c119ce91192/volumes"
Jan 22 14:10:40 crc kubenswrapper[4769]: I0122 14:10:40.901679 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ca4e3b-a376-4f54-88c0-75d4a912d489" path="/var/lib/kubelet/pods/46ca4e3b-a376-4f54-88c0-75d4a912d489/volumes"
Jan 22 14:10:40 crc kubenswrapper[4769]: I0122 14:10:40.902307 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e5e1134-cb08-4676-b40b-5e05af038ec7" path="/var/lib/kubelet/pods/8e5e1134-cb08-4676-b40b-5e05af038ec7/volumes"
Jan 22 14:10:40 crc kubenswrapper[4769]: I0122 14:10:40.902923 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bced8c79-d4b4-42dc-ba19-a4ba1eeb4387" path="/var/lib/kubelet/pods/bced8c79-d4b4-42dc-ba19-a4ba1eeb4387/volumes"
Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.042719 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"]
Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.050961 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dxwjl"]
Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.059272 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b906-account-create-update-rndmt"]
Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.066536 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dxwjl"]
Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.902599 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73fd3df5-6e83-4893-9368-66c1ba35155a" path="/var/lib/kubelet/pods/73fd3df5-6e83-4893-9368-66c1ba35155a/volumes"
Jan 22 14:10:44 crc kubenswrapper[4769]: I0122 14:10:44.905662 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b909a789-674d-40ba-b332-700e27464966" path="/var/lib/kubelet/pods/b909a789-674d-40ba-b332-700e27464966/volumes"
Jan 22 14:10:50 crc kubenswrapper[4769]: I0122 14:10:50.891174 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:10:50 crc kubenswrapper[4769]: E0122 14:10:50.892105 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:10:56 crc kubenswrapper[4769]: I0122 14:10:56.932365 4769 generic.go:334] "Generic (PLEG): container finished" podID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b" exitCode=0
Jan 22 14:10:56 crc kubenswrapper[4769]: I0122 14:10:56.932407 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tbnjt/must-gather-nlc24" event={"ID":"7529a8b3-1901-4ac4-9cee-f3ece4581ea8","Type":"ContainerDied","Data":"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"}
Jan 22 14:10:56 crc kubenswrapper[4769]: I0122 14:10:56.933446 4769 scope.go:117] "RemoveContainer" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"
Jan 22 14:10:57 crc kubenswrapper[4769]: I0122 14:10:57.462234 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tbnjt_must-gather-nlc24_7529a8b3-1901-4ac4-9cee-f3ece4581ea8/gather/0.log"
Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.535629 4769 scope.go:117] "RemoveContainer" containerID="76ee9e3f92bd4b52916160b7315f6f1bcae498478a919fab65490233e1c3a657"
Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.561611 4769 scope.go:117] "RemoveContainer" containerID="41ccd1233986e7a4c125219fe7adea8a9635992e6e64e942e038414ae80cde80"
Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.613833 4769 scope.go:117] "RemoveContainer" containerID="97b2836a40fe3718dc9876ac751e671d98460d0371e12f643bc7ac498b12c4d8"
Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.648585 4769 scope.go:117] "RemoveContainer" containerID="8c802b2b696d681ed9980b953b8105bed5cefd906bb042dcf0b8c4943c91185b"
Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.704376 4769 scope.go:117] "RemoveContainer" containerID="fb2e3c339083927502fb6cea262472f4288b04764f08eec3cbd1e7e2b61cc67d"
Jan 22 14:10:59 crc kubenswrapper[4769]: I0122 14:10:59.727912 4769 scope.go:117] "RemoveContainer" containerID="c074e42ca3ff188c7761b8f55de35192aed9fef36fdef20a8193ec2013468312"
Jan 22 14:11:03 crc kubenswrapper[4769]: I0122 14:11:03.032865 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-trlj5"]
Jan 22 14:11:03 crc kubenswrapper[4769]: I0122 14:11:03.042363 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-trlj5"]
Jan 22 14:11:04 crc kubenswrapper[4769]: I0122 14:11:04.885151 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:11:04 crc kubenswrapper[4769]: E0122 14:11:04.888334 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:11:04 crc kubenswrapper[4769]: I0122 14:11:04.899653 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4521e7ce-1245-4a18-9179-83a2b288e227" path="/var/lib/kubelet/pods/4521e7ce-1245-4a18-9179-83a2b288e227/volumes"
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.180620 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"]
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.181302 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tbnjt/must-gather-nlc24" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerName="copy" containerID="cri-o://1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b" gracePeriod=2
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.189896 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tbnjt/must-gather-nlc24"]
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.603744 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tbnjt_must-gather-nlc24_7529a8b3-1901-4ac4-9cee-f3ece4581ea8/copy/0.log"
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.604518 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/must-gather-nlc24"
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.654251 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") pod \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") "
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.654321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") pod \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\" (UID: \"7529a8b3-1901-4ac4-9cee-f3ece4581ea8\") "
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.660288 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc" (OuterVolumeSpecName: "kube-api-access-v94jc") pod "7529a8b3-1901-4ac4-9cee-f3ece4581ea8" (UID: "7529a8b3-1901-4ac4-9cee-f3ece4581ea8"). InnerVolumeSpecName "kube-api-access-v94jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.756388 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v94jc\" (UniqueName: \"kubernetes.io/projected/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-kube-api-access-v94jc\") on node \"crc\" DevicePath \"\""
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.831836 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7529a8b3-1901-4ac4-9cee-f3ece4581ea8" (UID: "7529a8b3-1901-4ac4-9cee-f3ece4581ea8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 14:11:05 crc kubenswrapper[4769]: I0122 14:11:05.858027 4769 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7529a8b3-1901-4ac4-9cee-f3ece4581ea8-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.020733 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tbnjt_must-gather-nlc24_7529a8b3-1901-4ac4-9cee-f3ece4581ea8/copy/0.log"
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.021275 4769 generic.go:334] "Generic (PLEG): container finished" podID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" containerID="1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b" exitCode=143
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.021346 4769 scope.go:117] "RemoveContainer" containerID="1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b"
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.021348 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tbnjt/must-gather-nlc24"
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.044080 4769 scope.go:117] "RemoveContainer" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.130170 4769 scope.go:117] "RemoveContainer" containerID="1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b"
Jan 22 14:11:06 crc kubenswrapper[4769]: E0122 14:11:06.130958 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b\": container with ID starting with 1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b not found: ID does not exist" containerID="1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b"
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.131010 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b"} err="failed to get container status \"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b\": rpc error: code = NotFound desc = could not find container \"1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b\": container with ID starting with 1dc63ef307cab4453f502e73b5f525685fd266557500aa01a5c30784d48c028b not found: ID does not exist"
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.131042 4769 scope.go:117] "RemoveContainer" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"
Jan 22 14:11:06 crc kubenswrapper[4769]: E0122 14:11:06.131574 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b\": container with ID starting with cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b not found: ID does not exist" containerID="cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.131605 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b"} err="failed to get container status \"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b\": rpc error: code = NotFound desc = could not find container \"cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b\": container with ID starting with cd35217481d81f29cfb74abcbd43b14ccfe181f633147cf4756bf7bb55d0937b not found: ID does not exist"
Jan 22 14:11:06 crc kubenswrapper[4769]: I0122 14:11:06.895580 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7529a8b3-1901-4ac4-9cee-f3ece4581ea8" path="/var/lib/kubelet/pods/7529a8b3-1901-4ac4-9cee-f3ece4581ea8/volumes"
Jan 22 14:11:07 crc kubenswrapper[4769]: I0122 14:11:07.037382 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7r9tp"]
Jan 22 14:11:07 crc kubenswrapper[4769]: I0122 14:11:07.047830 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-5nx2t"]
Jan 22 14:11:07 crc kubenswrapper[4769]: I0122 14:11:07.056203 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-5nx2t"]
Jan 22 14:11:07 crc kubenswrapper[4769]: I0122 14:11:07.062977 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7r9tp"]
Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.028145 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-892lk"]
Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.038502 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"]
Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.047780 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-24cb-account-create-update-rtdf4"]
Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.056155 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-892lk"]
Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.897158 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d72603e-a10a-4490-8298-67db64d087fc" path="/var/lib/kubelet/pods/3d72603e-a10a-4490-8298-67db64d087fc/volumes"
Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.898479 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0702a4-ee8a-45da-9cb7-40c2e4b257b9" path="/var/lib/kubelet/pods/ad0702a4-ee8a-45da-9cb7-40c2e4b257b9/volumes"
Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.899323 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0" path="/var/lib/kubelet/pods/ae6aa3a5-37b1-42b7-9bac-b861b2d47bb0/volumes"
Jan 22 14:11:08 crc kubenswrapper[4769]: I0122 14:11:08.900139 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb68cb3e-c079-4e87-ae9b-be93a2b8b80e" path="/var/lib/kubelet/pods/cb68cb3e-c079-4e87-ae9b-be93a2b8b80e/volumes"
Jan 22 14:11:11 crc kubenswrapper[4769]: I0122 14:11:11.028176 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"]
Jan 22 14:11:11 crc kubenswrapper[4769]: I0122 14:11:11.035817 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"]
Jan 22 14:11:11 crc kubenswrapper[4769]: I0122 14:11:11.042951 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-8372-account-create-update-lq4fn"]
Jan 22 14:11:11 crc kubenswrapper[4769]: I0122 14:11:11.050432 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8bb3-account-create-update-x6jhs"]
Jan 22 14:11:12 crc kubenswrapper[4769]: I0122 14:11:12.900482 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e2f7fd-cd2e-4a84-b62a-27915d32469c" path="/var/lib/kubelet/pods/51e2f7fd-cd2e-4a84-b62a-27915d32469c/volumes"
Jan 22 14:11:12 crc kubenswrapper[4769]: I0122 14:11:12.902159 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec90402f-c994-4710-b82f-5c8cc3f12fdf" path="/var/lib/kubelet/pods/ec90402f-c994-4710-b82f-5c8cc3f12fdf/volumes"
Jan 22 14:11:16 crc kubenswrapper[4769]: I0122 14:11:16.048527 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-r7c9w"]
Jan 22 14:11:16 crc kubenswrapper[4769]: I0122 14:11:16.059095 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-r7c9w"]
Jan 22 14:11:16 crc kubenswrapper[4769]: I0122 14:11:16.900115 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="275c0c66-cbd1-4469-81f6-c33a1eab0ed6" path="/var/lib/kubelet/pods/275c0c66-cbd1-4469-81f6-c33a1eab0ed6/volumes"
Jan 22 14:11:18 crc kubenswrapper[4769]: I0122 14:11:18.034263 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-t9sxw"]
Jan 22 14:11:18 crc kubenswrapper[4769]: I0122 14:11:18.043260 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-t9sxw"]
Jan 22 14:11:18 crc kubenswrapper[4769]: I0122 14:11:18.883515 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:11:18 crc kubenswrapper[4769]: E0122 14:11:18.884008 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:11:18 crc kubenswrapper[4769]: I0122 14:11:18.898672 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299" path="/var/lib/kubelet/pods/b4b4ca8a-8b9e-48d2-9208-fecb2bc9a299/volumes"
Jan 22 14:11:33 crc kubenswrapper[4769]: I0122 14:11:33.883504 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:11:33 crc kubenswrapper[4769]: E0122 14:11:33.884480 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:11:47 crc kubenswrapper[4769]: I0122 14:11:47.883850 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:11:47 crc kubenswrapper[4769]: E0122 14:11:47.884760 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:11:58 crc kubenswrapper[4769]: I0122 14:11:58.883693 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:11:58 crc kubenswrapper[4769]: E0122 14:11:58.899505 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:11:59 crc kubenswrapper[4769]: I0122 14:11:59.869581 4769 scope.go:117] "RemoveContainer" containerID="52648bb4b661a8c6c50f29dcbb2e628521c76a98f4664eeeaa26623f333c78ee"
Jan 22 14:11:59 crc kubenswrapper[4769]: I0122 14:11:59.901037 4769 scope.go:117] "RemoveContainer" containerID="9adc3b6e5ed26c0015ab034169ba62530ada71abb392698e2ee878b4e52729c9"
Jan 22 14:11:59 crc kubenswrapper[4769]: I0122 14:11:59.968232 4769 scope.go:117] "RemoveContainer" containerID="21355f679d3807ef130aaa327e0801fb4ef81abe61c9581a47edf5ff6be44534"
Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.024186 4769 scope.go:117] "RemoveContainer" containerID="3fff52ca9914171d818af9485b605a038595dddbd005e73b62529f4a697aa6bd"
Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.079527 4769 scope.go:117] "RemoveContainer" containerID="a23fe7e1f609804bd01eaf3b67aa868ecc07d3bf005fc4cf04bf270bb0eb13a4"
Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.113534 4769 scope.go:117] "RemoveContainer" containerID="77def06c9daefb086f0355ee46072f20bab89a75ed5e0bf4dc001c469ff25434"
Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.143376 4769 scope.go:117] "RemoveContainer" containerID="afe20a822b4f3e3d56773006d4aeb9478417b77dbf27f9940cbd13b2576b2dc2"
Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.163175 4769 scope.go:117] "RemoveContainer" containerID="61d9e5ec964872c1028545493f0b6a3c6f57bd0bc24e83e376180164d65cbfb4"
Jan 22 14:12:00 crc kubenswrapper[4769]: I0122 14:12:00.189769 4769 scope.go:117] "RemoveContainer" containerID="09178c7f0f25de3bb2d0040621da54e6d9636a7e539ca3291149727833705d8f"
Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.061379 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nv6tp"]
Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.071919 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bjdj8"]
Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.085200 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nv6tp"]
Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.093500 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bjdj8"]
Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.895983 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b938618-acdf-4f5f-8a04-daabc17cbb0c" path="/var/lib/kubelet/pods/4b938618-acdf-4f5f-8a04-daabc17cbb0c/volumes"
Jan 22 14:12:06 crc kubenswrapper[4769]: I0122 14:12:06.896643 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e92228-1a9b-49fc-9dfd-0493f70f5ee8" path="/var/lib/kubelet/pods/a0e92228-1a9b-49fc-9dfd-0493f70f5ee8/volumes"
Jan 22 14:12:13 crc kubenswrapper[4769]: I0122 14:12:13.884732 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:12:13 crc kubenswrapper[4769]: E0122 14:12:13.886147 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:12:14 crc kubenswrapper[4769]: I0122 14:12:14.050207 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rqjpw"]
Jan 22 14:12:14 crc kubenswrapper[4769]: I0122 14:12:14.057282 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rqjpw"]
Jan 22 14:12:14 crc kubenswrapper[4769]: I0122 14:12:14.898358 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c0ef06-5806-418c-8a10-81ea6afb0401" path="/var/lib/kubelet/pods/f7c0ef06-5806-418c-8a10-81ea6afb0401/volumes"
Jan 22 14:12:24 crc kubenswrapper[4769]: I0122 14:12:24.031970 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-zzjpd"]
Jan 22 14:12:24 crc kubenswrapper[4769]: I0122 14:12:24.039219 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-zzjpd"]
Jan 22 14:12:24 crc kubenswrapper[4769]: I0122 14:12:24.894186 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f766e1-262c-4861-a117-2454631e284f" path="/var/lib/kubelet/pods/a7f766e1-262c-4861-a117-2454631e284f/volumes"
Jan 22 14:12:25 crc kubenswrapper[4769]: I0122 14:12:25.035720 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-l4hnw"]
Jan 22 14:12:25 crc kubenswrapper[4769]: I0122 14:12:25.048701 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-l4hnw"]
Jan 22 14:12:26 crc kubenswrapper[4769]: I0122 14:12:26.896212 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb8819f-512d-43d8-a59e-1ba8e7e1fb06" path="/var/lib/kubelet/pods/3eb8819f-512d-43d8-a59e-1ba8e7e1fb06/volumes"
Jan 22 14:12:28 crc kubenswrapper[4769]: I0122 14:12:28.883429 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:12:28 crc kubenswrapper[4769]: E0122 14:12:28.884090 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:12:39 crc kubenswrapper[4769]: I0122 14:12:39.883329 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:12:39 crc kubenswrapper[4769]: E0122 14:12:39.884092 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:12:51 crc kubenswrapper[4769]: I0122 14:12:51.883825 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:12:51 crc kubenswrapper[4769]: E0122 14:12:51.884691 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.048228 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fllmn"]
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.055650 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fllmn"]
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.065918 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"]
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.074457 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-264d-account-create-update-4z8cb"]
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.385891 4769 scope.go:117] "RemoveContainer" containerID="fe625d5ef022f97b15014934b8ace95f1c730255ffa2604dde5ccc072b731811"
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.423160 4769 scope.go:117] "RemoveContainer" containerID="7f8570350656236f2df14cf1385749f2acad79acf56a71c03ae5fb37c7ed236c"
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.470770 4769 scope.go:117] "RemoveContainer" containerID="5e70825bce9fda82996c69d7184b5c0089e4b77074cca5f87821576c29bc3590"
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.564277 4769 scope.go:117] "RemoveContainer" containerID="6a9857699ee5a25dcfbbfd97a9806c7b0bc9c1947fe854676a7dd2547f60a656"
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.591675 4769 scope.go:117] "RemoveContainer" containerID="3c1a07b1b0fdcc85ff1215b6b0ffc50eb270b562fc9ca8873d111f3b05220e1b"
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.666143 4769 scope.go:117] "RemoveContainer" containerID="4814c2687ce225a42dac55f4070477c0bf4c2e838fc60d85c396b3c0a24f2c9c"
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.898084 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecb8a996-384c-4155-b45d-6a6335165545" path="/var/lib/kubelet/pods/ecb8a996-384c-4155-b45d-6a6335165545/volumes"
Jan 22 14:13:00 crc kubenswrapper[4769]: I0122 14:13:00.899218 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe68065a-9702-4440-a09a-2698d21ad5cc" path="/var/lib/kubelet/pods/fe68065a-9702-4440-a09a-2698d21ad5cc/volumes"
Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.037046 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"]
Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.052836 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tx7mp"]
Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.059967 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"]
Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.067565 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5t26t"]
Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.075083 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-49d8-account-create-update-gnbhc"]
Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.081657 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tx7mp"]
Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.088228 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ddb8-account-create-update-zm48k"]
Jan 22 14:13:01 crc kubenswrapper[4769]: I0122 14:13:01.094596 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5t26t"]
Jan 22 14:13:02 crc kubenswrapper[4769]: I0122 14:13:02.913657 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="288566dc-b78e-46e4-9bd3-c61bc9c2a6ce" path="/var/lib/kubelet/pods/288566dc-b78e-46e4-9bd3-c61bc9c2a6ce/volumes"
Jan 22 14:13:02 crc kubenswrapper[4769]: I0122 14:13:02.915155 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33b7a35-52b8-47c6-b5a7-5cf87d838927" path="/var/lib/kubelet/pods/b33b7a35-52b8-47c6-b5a7-5cf87d838927/volumes"
Jan 22 14:13:02 crc kubenswrapper[4769]: I0122 14:13:02.915942 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcc2db5-9739-4e49-a6cc-3f7aff70f97d" path="/var/lib/kubelet/pods/cdcc2db5-9739-4e49-a6cc-3f7aff70f97d/volumes"
Jan 22 14:13:02 crc kubenswrapper[4769]: I0122 14:13:02.916735 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45f7c9a-23a2-40fe-80dc-305f1fbc8e17" path="/var/lib/kubelet/pods/e45f7c9a-23a2-40fe-80dc-305f1fbc8e17/volumes"
Jan 22 14:13:03 crc kubenswrapper[4769]: I0122 14:13:03.883340 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:13:03 crc kubenswrapper[4769]: E0122 14:13:03.883627 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:13:17 crc kubenswrapper[4769]: I0122 14:13:17.883850 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:13:17 crc kubenswrapper[4769]: E0122 14:13:17.884628 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:13:29 crc kubenswrapper[4769]: I0122 14:13:29.884743 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:13:29 crc kubenswrapper[4769]: E0122 14:13:29.885583 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:13:30 crc kubenswrapper[4769]: I0122 14:13:30.046050 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"]
Jan 22 14:13:30 crc kubenswrapper[4769]: I0122 14:13:30.053900 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-hql94"]
Jan 22 14:13:30 crc kubenswrapper[4769]: I0122 14:13:30.894451 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf" path="/var/lib/kubelet/pods/4c62b2d6-1d5c-40f1-ac1a-42cd36f0c4cf/volumes"
Jan 22 14:13:42 crc kubenswrapper[4769]: I0122 14:13:42.884472 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:13:42 crc kubenswrapper[4769]: E0122 14:13:42.885748 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:13:55 crc kubenswrapper[4769]: I0122 14:13:55.047798 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"]
Jan 22 14:13:55 crc kubenswrapper[4769]: I0122 14:13:55.059745 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6vgx7"]
Jan 22 14:13:56 crc kubenswrapper[4769]: I0122 14:13:56.884276 4769 scope.go:117] "RemoveContainer" containerID="e7bae61700e44833a742aeb47b10d93605fc854e6fc9c589859554c1c0a5b135"
Jan 22 14:13:56 crc kubenswrapper[4769]: E0122 14:13:56.884847 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-hwhw7_openshift-machine-config-operator(f0af8746-c9f0-48e6-8a60-02fed286b419)\"" pod="openshift-machine-config-operator/machine-config-daemon-hwhw7" podUID="f0af8746-c9f0-48e6-8a60-02fed286b419"
Jan 22 14:13:56 crc kubenswrapper[4769]: I0122 14:13:56.895556 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3137766d-8b45-47a0-a7ca-f1a3c381450d" path="/var/lib/kubelet/pods/3137766d-8b45-47a0-a7ca-f1a3c381450d/volumes"
Jan 22 14:13:57 crc kubenswrapper[4769]: I0122 14:13:57.048181 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"]
Jan 22 14:13:57 crc kubenswrapper[4769]: I0122 14:13:57.058883 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cg5m6"]
Jan 22 14:13:58 crc kubenswrapper[4769]: I0122 14:13:58.899694 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60fa7062-c4e9-4700-88e1-af5262989c6f" path="/var/lib/kubelet/pods/60fa7062-c4e9-4700-88e1-af5262989c6f/volumes"
Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.812986 4769 scope.go:117] "RemoveContainer" containerID="7c716f4cbcf6f24dd054838f2140dd17dfc86e227f15ff8751421f1115943a30"
Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.855876 4769 scope.go:117] "RemoveContainer" containerID="35419b0caadf70dae858a9997b2843ac8c049f423da3e9c017409f33d3f2290e"
Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.891635 4769 scope.go:117] "RemoveContainer" containerID="afb16cda8136e3c60a4cc4eee0a34fec39387efd7fcb1e371afcd2d6220a3675"
Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.951656 4769 scope.go:117] "RemoveContainer" containerID="98cf78384a8d16885b92b730a74a3979d2ab97411451096f63dae1f0143aa7f4"
Jan 22 14:14:00 crc kubenswrapper[4769]: I0122 14:14:00.969981 4769 scope.go:117] "RemoveContainer" containerID="18279fc40052f609766481b086ba6db177d4033484da61ddaf6b1e3ccb376090"
Jan 22 14:14:01 crc kubenswrapper[4769]: I0122 14:14:01.037118 4769 scope.go:117] "RemoveContainer" containerID="5bf2e7be98fe42d0c15cb0b41bd3e6c08f22798c04acc10db52946a1a04187f4"
Jan 22 14:14:01 crc kubenswrapper[4769]: I0122 14:14:01.072513 4769 scope.go:117] "RemoveContainer" containerID="b968152c0d0005bd0bae6dd12531f4e3ac4944479a46e411981d500bf6e21a03"
Jan 22 14:14:01 crc kubenswrapper[4769]: I0122 14:14:01.105731 4769 scope.go:117] "RemoveContainer" containerID="be7b8f38b3fcc55abca045ec63342b69733efd9d1dc30413ccf64f860152d0b1"
Jan 22 14:14:01 crc kubenswrapper[4769]: I0122 14:14:01.122853 4769 scope.go:117] "RemoveContainer" containerID="751475c8a4f373e18f772a466e3903901a4fe7bb3bad0aaf09ffde9f52db0d97"